Grid and Cloud Activities in KISTI
Soonwook Hwang, KISTI, Korea
March 23, 2011
Outline
- Grid Operation and Infrastructure
  - KISTI ALICE Tier-2 Center
  - FKPPL VO: Production Grid Infrastructure
- Global Science Data Center
- Partnership & Leadership for Nationwide Supercomputing Infrastructure
- Development of AMGA within EMI
KISTI ALICE Tier2 Center
KISTI ALICE Tier-2 Center
- Signing of the WLCG MoU for the ALICE Tier-2 center (Oct. 23, 2007)
- Has been part of the ALICE distributed computing grid as an official T2
- Provides a reliable and stable node in the ALICE Grid
- Funded by MEST: ~200,000 US dollars/year
KISTI's Contribution to ALICE Computing
- ~1.2% contribution to ALICE computing in total job execution
- 120 CPU cores and 50 TB of storage dedicated to the ALICE experiment
- Processing nearly 8,000 jobs per month on average
Accounting and Availability of KISTI
FKPPL VO: Production Grid Infrastructure
FKPPL VO
- Grid based on gLite
- Objective: foster the adoption of Grid technology and provide researchers in Korea and France with a production Grid infrastructure
- Operation
  - Up and running since October 2008, providing about 10,000 CPU cores and 30 TB of disk storage
  - Last December, KEK joined the FKPPL VO, contributing ~1,600 CPU cores and 27 TB of disk
  - Discussions are under way on moving toward the construction of a France-Asia VO
  - As of now, ~70 users have joined the FKPPL VO
[Diagram: FKPPL VO service layout — VOMS, WMS, LFC, CE, SE, UI and WIKI nodes distributed across IN2P3 and KISTI]
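To give a feel for how users run work on a gLite-based VO such as FKPPL, the sketch below shows a minimal JDL (Job Description Language) file; the executable and file names are illustrative, not taken from the slides:

```
// hello.jdl -- minimal gLite job description (illustrative sketch)
Executable          = "/bin/hostname";
StdOutput           = "hello.out";
StdError            = "hello.err";
OutputSandbox       = {"hello.out", "hello.err"};
VirtualOrganisation = "fkppl.kisti.re.kr";
```

From a UI node, a user would typically obtain a VOMS proxy with `voms-proxy-init --voms fkppl.kisti.re.kr`, submit with `glite-wms-job-submit -a hello.jdl`, and later poll `glite-wms-job-status` and fetch results with `glite-wms-job-output` — matching the VOMS/WMS/UI/CE/SE services in the deployment diagram above.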
Application Porting Support on FKPPL VO
- Deployment of Geant4 applications
  - Used extensively by the National Cancer Center in Korea to carry out compute-intensive simulations relevant to cancer treatment planning
  - In collaboration with Dr. Jungwook Shin and Dr. Se Byeong Lee of the National Cancer Center in Korea
- Deployment of two-color QCD (Quantum ChromoDynamics) simulations in theoretical physics
  - Several hundreds to thousands of QCD jobs need to be run on the Grid, with each job taking about 10 days
  - In collaboration with Prof. Seyong Kim of Sejong University
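Hundreds to thousands of independent QCD runs map naturally onto gLite parametric jobs, where the WMS expands a single JDL into one job per parameter value. A hedged sketch (the wrapper script name and argument convention are assumptions for illustration):

```
// qcd.jdl -- one JDL expanded into 1000 jobs by the WMS (illustrative sketch)
JobType        = "Parametric";
Executable     = "run_qcd.sh";          // hypothetical wrapper script
Arguments      = "_PARAM_";             // _PARAM_ is replaced per job
Parameters     = 1000;                  // values 0, 1, ..., 999
ParameterStart = 0;
ParameterStep  = 1;
InputSandbox   = {"run_qcd.sh"};
StdOutput      = "qcd._PARAM_.out";
StdError       = "qcd._PARAM_.err";
OutputSandbox  = {"qcd._PARAM_.out", "qcd._PARAM_.err"};
```

One submission then fans out into 1000 independent Grid jobs, which suits the embarrassingly parallel structure of the two-color QCD campaign described above.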
Distribution of Normalized CPU Time (HEPSPEC06), Grouped by VO (Sep. 2010 – Nov. 2010)

Rank  VO                  Total
 1    atlas               712,986,044
 2    cms                 228,199,324
 3    alice               204,675,996
 4    lhcb                 79,816,476
 5    theophys             54,402,212
 6    dzero                17,933,284
 7    compchem             14,215,152
 8    ilc                  14,167,236
 9    vo.cta.in2p3.fr       6,971,240
10    biomed                6,684,528
11    superbvo.org          5,375,896
12    auger                 5,358,432
13    hone                  5,136,952
14    pheno                 4,866,264
15    icecube               4,363,112
16    fkppl.kisti.re.kr     4,150,920
17    see                   4,086,732
18    cosmo                 3,862,432
19    vo.lal.in2p3.fr       3,095,064
20    enmr.eu               2,799,256
Total (all VOs)         1,420,148,568

Excluding the four LHC VOs (atlas, cms, alice, lhcb), the non-LHC ranking keeps the same order; fkppl.kisti.re.kr ranks 12th among non-LHC VOs.
Grid Training (1/2)
- In February, we organized the Geant4 and Grid Tutorial 2010 for the Korean medical physics community
  - About 34 participants from major hospitals in Korea
  - About 20 new users joined the FKPPL VO
Grid Training (2/2)
- The 2010 Summer Training Course on Geant4, GATE and Grid Computing was held in Seoul in July
  - About 50 participants from about 20 institutes in Korea
Global Science Experimental Data Hub Center
KISTI GSDC Center
- Korean government support:
  - Computing and storage infrastructure
  - Technology development
  - Applying Grid technology to legacy applications
- Services: ALICE Tier-1 prototype (receiving RAW data), ALICE Tier-2, and KiAF (KISTI Analysis Farm)
- Supporting data-centric research communities and promoting research collaboration
Roadmap
- Phase 1 (2009–2011): National Data Center
  - 2 PB storage / 2,000 CPU cores
  - Provide a global science data analysis environment: HEP (ALICE/CERN, CDF/FNAL, STAR/BNL, Belle/KEK), astronomy (LIGO/LLO), etc.
- Phase 2 (2012–2014): Asia-Pacific Hub Center
  - 5 PB storage / 5,000 CPU cores
  - Expand supported fields: earth environment, biometrics, nano-tech, etc.
  - Global computing resource assignment and information system
  - Cyber research and training environment
Partnership and Leadership for Nationwide Supercomputing Infrastructure (PLSI)
PLSI
- Consortium of 14 HPC computing centers in Korea
- Distributed HPC computing environments for world-class computational science research
- Period: 2007 – present
- Budget: ~US$2M/year
- Goal: 400 TFlops across 14 HPC centers around Korea
- Current status: established ~80 TFlops of computing capacity by combining 15 computing resources at 10 partner sites over dedicated high-performance networks
Distributed HPC Infrastructure [Figure 1] PLSI unified computing service infrastructure
PLSI Portal
- PLSI User Portal
  - PLSI resource allocation, job submission and management
  - User accounting information, SSH & SFTP terminal
- PLSI MGrid Portal
  - Application portal targeting molecular dynamics simulations
Development of AMGA
AMGA
- An official gLite middleware component for a metadata catalogue service
- AMGA provides access to metadata for files distributed on the Grid
- KISTI has led AMGA development since July 2009
  - AMGA 2.0, supporting the OGF WS-DAIR standard, was released in October 2009 in collaboration with CERN and INFN
  - AMGA 2.1, released in April 2010, supports federation of metadata; an AMGA GUI client was developed as part of the release
- KISTI is one of the EMI product teams, contributing to the evolution and maintenance of AMGA
- Application areas: drug discovery, high-energy physics, digital libraries, climate research
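To illustrate the kind of metadata access AMGA provides, the sketch below shows a short session with the AMGA command-line client using its core metadata commands; the directory, attribute names and values are made up for illustration:

```
Query> createdir /qcd
Query> addattr /qcd beta float
Query> addattr /qcd lattice varchar(16)
Query> addentry /qcd/run001 beta 5.6 lattice 16x32
Query> listattr /qcd/run001
```

Entries (here, logical file names) live in a directory tree, each directory defines a typed attribute schema, and clients attach and query attribute values per entry — the metadata model that the application areas above build on.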
Participation in EMI with AMGA
Summary
- As in EGEE, KISTI is an official partner of the European Grid projects EGI and EMI, with continuing contributions to production-quality Grid operation and to AMGA development, respectively
- We are moving one step further toward setting up a France-Asia VO, starting from the FKPPL VO
- With the KISTI GSDC, KISTI is expected to play the role of a Tier-1 data center in addition to its traditional role as a supercomputing center
Thank you for your attention!