Grid-related Activities around KISTI Tier2 Center for ALICE


1 2010 APCTP LHC, Konkuk Univ. Grid-related Activities around KISTI Tier2 Center for ALICE. August 10, 2010. Soonwook Hwang, KISTI

2 Outline: Introduction to KISTI; KISTI ALICE Tier2 Center; Development of PROOF

3 Introduction to KISTI

4 Who are we? Located in Daejeon. Organization: 3 centers, 10 departments, 5 branch offices. Personnel: about 350 regular staff and 150 part-time workers. Annual revenue: about 100M USD, mostly funded by the government.

5 We are a pioneer in building the cyber-infrastructure of Korea! A government-funded research institute serving scientists and engineers in Korea with scientific/industrial databases, a research network and supercomputing infrastructure. Knowledge Information Center: S&T information resources & services. Information Analysis Center: technological & industrial information analysis. Supercomputing Center: supercomputing service, high-performance R&D network service.

6 National Research Network (KREONET) [Map of the KREONET backbone: links of 2.5 to 40 Gbps among regional centers including Seoul, Incheon, Suwon, Cheonan, Ochang, Daejeon, Jeonju, Kwangju, Daegu, Pohang, Changwon, Kangneung, Busan and Jeju] KREONET is the national science & research network of Korea, funded by MEST since 1988. Around 200 linked organizations: universities, government labs, industry labs. GLORIAD for international connections. 14 regional centers.

7 GLORIAD 10G Networks [Map of GLORIAD/KRLight OC192/10G links: KISTI (Daejeon, Korea) connected over KREONET to Seattle, US (10G KR-US) and via HKLight, Hong Kong (10G KR-HK) to CNIC/CSTNET in China (Beijing); onward connections to CANARIE in Canada (Calgary, Toronto), Chicago/NYC (StarLight, PacificWave GLIF nodes), Russia (Moscow, Novosibirsk), Amsterdam and the EU]

8 KISTI's 4th Supercomputer. Cluster system, 1st phase: SUN C48, target performance 24 TFlops, InfiniBand 4x DDR (20 Gbps), external storage 200 TBytes. 2nd phase: SUN, target performance about 300 TFlops, about 21,000 CPU cores, 1.0 PBytes external storage, expected to be delivered in early 2010. SMP system: IBM p595 & p6, 10 nodes (1st phase) and 24 nodes (2nd phase), target performance 36 TFlops, internal disk 1,17 GB, external storage 63 TB (1st) and 273 TB (2nd), HPS interconnect (1st) and InfiniBand 4x DDR (2nd).

9 KISTI ALICE Tier2 Centre

10 Brief History of KISTI-ALICE Collaboration. September 2006: Dr. Federico Carminati visited KISTI to discuss the construction of an ALICE Tier2 center at KISTI. February 2007: approved as an ALICE-LCG site. October 2007: WLCG MoU signed between KICOS and CERN; KISTI became an official Tier2 site for the ALICE experiment. July 2008: International Summer School on Grid Computing and e-Science, with a tutorial on ALICE computing (AliEn, ROOT, PROOF). October 2009: KISTI developers visited the CERN ALICE computing group to extend the collaboration to development work, including PROOF, for the ALICE computing framework.

11 Pledged Resources. WLCG MoU signed on October 23, 2007. Signing parties: CERN, Dr. Jos Engelen (CSO of CERN); funding agency, Chun Il Eom (KICOS). ALICE experiment support.

12 ALICE Tier2 Centre: ALICE Services & Computing Resources. ALICE-related services: RB, WMS+LB, PX, RGMA, BDII, VOMS, LFC, VOBOX, UI. CE: LCG-CE (removed); CREAM-CE with 128 CPU cores (HP blade servers, 16 nodes), dedicated to the ALICE experiment. SE: 30 TB of disk space with a 10 Gbps international connection. Initially DPM (Disk Pool Manager) was used as the storage management system; since early September a pure xrootd server has been up and running.

13 glite Middleware Services Deployed in the KISTI ALICE T2 Center

14 ALICE Tier2 Centre Site Overview

15 Active jobs on the KISTI CREAM-CE

16 Status of Storage Element. KISTI SE: size 28 TB, used 17.26 TB, number of files 356,702.

17 KISTI's Contribution to ALICE Computing. Total wall time for ALICE last week: 1.21%.

18 KISTI's contribution to ALICE computing: 1.2% of the total job execution, processing nearly 8,000 jobs per month on average.

19 PROOF: Parallel ROOT Facility. PROOF is part of ROOT: if you have ROOT installed on your desktop/laptop, you have PROOF (PROOF-Lite) as well; if you have a cluster, you need to configure/enable it manually. PROOF is (approximately) ROOT on a cluster of computers: input files are processed on a set of computers (master + workers) in parallel and the results are sent back to you. Where to use PROOF: Central Analysis Facilities (CAF), department workgroup facilities, desktops and laptops with multiple cores and/or multiple disks. PROOF facilities for ALICE: CAF at CERN (Switzerland), SKAF at Bratislava (Slovakia), LAF at Lyon (France), and the KISTI test and development farm.
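As a rough illustration of the point that PROOF ships with ROOT, the sketch below opens a local PROOF-Lite session and processes a chain in parallel with a selector. It is a minimal sketch, not material from the talk: the tree name "esdTree", the input file names and "MySelector.C" are placeholders.

```cpp
// run_prooflite.C -- minimal sketch of PROOF-Lite on a multi-core desktop.
// The tree name, file names and selector are illustrative placeholders.
#include "TProof.h"
#include "TChain.h"

void run_prooflite()
{
   // Start a PROOF-Lite session: workers are processes on the local machine,
   // one per core by default, with no cluster configuration required.
   TProof *p = TProof::Open("lite://");

   // Build the chain of input files to be analysed.
   TChain ch("esdTree");
   ch.Add("data/run1.root");
   ch.Add("data/run2.root");

   // Attach the chain to the PROOF session and process it with a TSelector;
   // work distribution and merging of the results are handled by PROOF.
   ch.SetProof();
   ch.Process("MySelector.C+");
}
```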

20 Trivial / Ideal Parallelism

21 The PROOF Approach: the cluster is perceived as an extension of the local PC; more dynamic use of resources; real-time feedback; automatic splitting and merging.
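The automatic splitting and merging is exposed through the TSelector interface. The schematic skeleton below (not taken from the talk; names are illustrative) shows where the hooks are: SlaveBegin() runs on every worker and registers output objects, Process() is called for each entry assigned to a worker, and Terminate() runs on the client after PROOF has merged the contents of the output list.

```cpp
// MySelector.h -- schematic TSelector showing PROOF's split/merge hooks.
#include "TSelector.h"
#include "TH1F.h"

class MySelector : public TSelector {
public:
   TH1F *fHist;                        // per-worker histogram
   MySelector() : fHist(0) {}
   Int_t  Version() const { return 2; }

   void   SlaveBegin(TTree *) {        // runs once on every worker
      fHist = new TH1F("h", "example", 100, 0., 10.);
      fOutput->Add(fHist);             // objects in fOutput are merged automatically
   }
   Bool_t Process(Long64_t entry) {    // runs for each entry on the worker
      // read the branches needed for this entry and fill fHist here
      return kTRUE;
   }
   void   Terminate() {                // runs on the client after merging
      TH1F *h = (TH1F *) fOutput->FindObject("h");
      if (h) h->Draw();
   }
   ClassDef(MySelector, 0);
};
```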

22 Dynamic Load Balancing: the pull architecture guarantees scalability and adapts to performance variations.

23 Some other PROOF features: background/asynchronous running; real-time feedback (a set of objects can be defined to be sent back while running); PAR files (user-defined packages); automatic merging of user-defined classes (if the user defines a Merge method). PROOF itself is not tied to the Grid: it can access Grid files, and PROOF sessions on the Grid are planned.
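A hedged sketch of the features listed above, on an already open session: shipping a PAR package, registering a feedback object, and submitting a query asynchronously. The package name, the feedback object name and the "ASYN" option string are assumptions for illustration, not KISTI-specific settings.

```cpp
// proof_features.C -- sketch of PAR packages, feedback and async running.
// Assumptions: "MyAna.par" is a hypothetical PAR file; "ASYN" is used here
// as the option requesting asynchronous (background) processing.
#include "TProof.h"
#include "TChain.h"

void proof_features(TProof *p, TChain *ch)
{
   // PAR files: ship a user-defined package to the workers and enable it.
   p->UploadPackage("MyAna.par");
   p->EnablePackage("MyAna");

   // Real-time feedback: ask PROOF to stream this object back during the run.
   p->AddFeedback("h");

   // Background / asynchronous running: the call returns immediately while
   // the query keeps running on the cluster; results are retrieved later.
   ch->SetProof();
   ch->Process("MySelector.C+", "ASYN");
}
```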

24 PROOF architecture

25 PROOF Benchmark Suite

26 Objectives: provide a library to measure/tune the performance of a PROOF cluster for administrators and developers. With the suite, optimal configuration parameters of the cluster can be determined: large-scale clusters, multi-master clusters, many cores, large disks/files, and many users.

27 Tasks. CPU-bound test: tests pure CPU processing and merging of 1D, 2D and 3D histograms; summary plot: event rate (#events/sec). IO-bound test: event generation with the $ROOTSYS/test/Event class; a few different file distribution and processing strategies available; summary plots: event rate (#events/sec) and IO rate (MB/sec). ALICE Analysis Workshop for Asian Communities, Hiroshima Univ., Japan.
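For orientation, a sketch of driving both tests from a single steering class, assuming the interface that later shipped in ROOT as TProofBench (the "single steering class" mentioned on the Status and Plan slide). The connection URL and the use of default arguments are illustrative assumptions.

```cpp
// run_proofbench.C -- sketch of the benchmark suite via a steering class,
// assuming the TProofBench interface that later shipped with ROOT.
#include "TProofBench.h"

void run_proofbench()
{
   // Attach the benchmark to a PROOF cluster (here: a local PROOF-Lite session).
   TProofBench pb("lite://");

   // CPU-bound test: fill 1D/2D/3D histograms and measure the event rate
   // as a function of the number of active workers.
   pb.RunCPU();

   // IO-bound test: generate benchmark files based on the $ROOTSYS/test/Event
   // class, then read them back, measuring event and IO rates.
   pb.MakeDataSet();
   pb.RunDataSet();
}
```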

28 PROOF Cluster at KISTI: used for software development and testing. 40 cores (5 nodes, 8 cores/node), ~60 GB/node, 1 Gbps NIC, 16 GB memory per node. PROOF/xrootd/cmsd, AliRoot, etc. on SLC.
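To make the client side concrete, the sketch below attaches to a dedicated PROOF cluster such as the development farm described above and runs a selector over a dataset registered on the cluster. The master hostname "proof.kisti.re.kr" and the dataset name are hypothetical placeholders.

```cpp
// connect_cluster.C -- sketch of attaching to a dedicated PROOF cluster.
// Hostname and dataset name are hypothetical, not the actual KISTI setup.
#include "TProof.h"
#include <cstdio>

void connect_cluster()
{
   // Open a session against the cluster master (xproofd must run there).
   TProof *p = TProof::Open("proof.kisti.re.kr");

   // Report how many workers joined the session.
   printf("Parallel workers: %d\n", p->GetParallel());

   // Process a dataset previously registered on the cluster with a selector.
   p->Process("/default/user/benchDataSet", "MySelector.C+");
}
```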

29 Status and Plan. ProofBench is available in SVN. A few improvements planned: a couple of quick (non-proper) solutions to be replaced, bug fixes, documentation, tests on a large-scale system (e.g. the CERN CAF), and a single steering class for users' convenience, which the next ROOT release will include. Next PROOF development: multi-master PROOF, combining multiple PROOF sites under a unique entry point, and enabling PROOF on the Grid.

30

31 Three Centers of KISTI. Knowledge Information Center: establishment of a digital information dissemination infrastructure by developing science and technology information resources. Information Analysis Center: provision of information analysis on domestic and foreign science and technology trends, prospective research areas and the technology market. Supercomputing Center: establishment of a world-class supercomputing environment, R&D supporting infrastructure and a global S&T collaboration research network.
