IT4Innovations and PRACE Computational Resources for Science and Research


IT4Innovations and PRACE Computational Resources for Science and Research
Filip Staněk
Grid Computing Seminar 2011 (Seminář gridového počítání 2011), MetaCentrum, Brno, 7 November 2011

Introduction I
Project objective: to establish a centre focused on research in the field of IT, with an emphasis on the development of HPC.
Three basic project levels:
1. Resources management: creating a functioning research centre
2. Implementation of the research programmes
3. Acquisition of research infrastructure, particularly supercomputing facilities

Introduction II
Implementation period for the grant-covered part of the project: July 2011 to December 2015
EU + MEYS grant: over 75 million EUR
Principal stages:
- 7/11: start of implementation
- 9/11: start of research programmes
- Q3/12: acquisition of the small cluster (container solution)
- Q1/13: small cluster ready for utilization by external academic institutions
- 2014: installation of the big cluster in the new building
- 2015: full operation of the IT4I centre

Research areas and programmes
Information Technologies for People (IT4People):
- RP1 IT for Disaster and Traffic Management (VSB-TUO)
- RP7 Multimedia Information Recognition and Presentation (BUT)
Supercomputing for Industry (SC4Industry):
- RP2 Numerical Modelling for Engineering (VSB-TUO and IG AS)
- RP3 Libraries for Parallel Computing (VSB-TUO)
- RP4 Modelling for Nanotechnologies (VSB-TUO)
Theory for Information Technologies (Theory4IT):
- RP5 IT for Knowledge Management (VSB-TUO)
- RP6 Soft Computing Methods with Supercomputer Applications (OU)
- RP8 Secure and Safe Architectures, Networks, and Protocols (BUT)
All the partners, VSB-TUO (Technical University of Ostrava), BUT (Brno University of Technology), OU (University of Ostrava), SU (Silesian University) and IG AS (Institute of Geonics), cooperate within these programmes.

IT infrastructure: HW
2012 small cluster:
- x86 cluster, 4096 cores, 4 GB RAM per core, InfiniBand QDR (non-blocking topology)
- GPU-accelerated nodes
- storage capacity ca. 1 PB (5/20/75 % of capacity on SAS/SATA/tape HSM)
- ALADIN system: x86/POWER architecture, 26 TFLOP/s, up to 5 PB storage (5/20/75 %), most likely SMP/NUMA

Resources management = business model
- 50 % internal access: for our own research programmes
- 30 % open access: for external institutions, access based on grant competition
- 20 % dedicated access: for projects of national importance

R&D Partnership (diagram): external organizations reach the supercomputer either through open access or through R&D partnerships with the research programmes (SC4Industry: RP2, RP3, RP4; Theory4IT: RP5, RP6, RP8; IT4People: RP1, RP7).

External users access scheme

External open access: types of allocations
- Small (up to 5,000 bu per month + 10 GB disk storage): testing purposes and evaluation; 4x per year, no peer review
- Medium (up to 50,000 bu per month + 50 GB disk storage): advanced projects for parallel computing; 4x per year, no peer review, possible sharing of operational cost
- Large (up to 500,000 bu per month + individual disk storage requirements): large projects for parallel computing; 2x per year, peer review, possible sharing of operational cost
- Grand Challenge (more than 500,000 bu per month): possible use of PRACE resources
1 bu = 1 billing unit = 1 core hour (a worked sketch follows below)
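To make the billing arithmetic concrete, here is a minimal Python sketch (not part of the original slides) that computes a project's monthly billing units from core counts and wall-clock hours and maps the result onto the allocation tiers above. The function names and the tier table are illustrative assumptions, not an IT4I API.

    # Minimal sketch (assumption: 1 bu = 1 core hour; tiers as on this slide).
    # Names and thresholds are illustrative, not an IT4I API.

    ALLOCATION_TIERS = [
        ("Small", 5_000),       # bu per month: testing, evaluation
        ("Medium", 50_000),     # bu per month: advanced parallel projects
        ("Large", 500_000),     # bu per month: large parallel projects
    ]

    def billing_units(cores: int, wall_hours: float) -> float:
        """1 billing unit == 1 core hour, so bu = cores * wall-clock hours."""
        return cores * wall_hours

    def smallest_tier(monthly_bu: float) -> str:
        """Smallest allocation tier whose monthly cap covers the usage."""
        for name, cap in ALLOCATION_TIERS:
            if monthly_bu <= cap:
                return name
        return "Grand Challenge"  # > 500,000 bu/month; possible PRACE resources

    # Example: 20 runs per month, each on 256 cores for 8 wall-clock hours
    usage = 20 * billing_units(cores=256, wall_hours=8)  # = 40,960 bu
    print(usage, smallest_tier(usage))                   # 40960 Medium

Such a project would fall into the Medium tier, i.e. it could be requested 4x per year without peer review.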

Dedicated access scheme

PRACE
The Partnership for Advanced Computing in Europe is the European HPC Research Infrastructure.
- PRACE enables world-class science through large-scale simulations.
- PRACE provides HPC services on leading-edge capability systems on a diverse set of architectures.
- PRACE operates up to six Tier-0 systems as a single entity, including user and application support.
- PRACE offers its resources through a single pan-European peer review process.
- PRACE has been providing services since August 2010.
- The first Tier-0 system is the fastest supercomputer in Europe.

The European HPC Ecosystem
PRACE is preparing the creation of a persistent pan-European HPC service, consisting of several Tier-0 centres that provide European researchers with access to capability computers and form the top level of the European HPC ecosystem.
The Czech Republic has been a member of PRACE since 2010; IT4Innovations is a Tier-1 system in the European HPC ecosystem.

Current status
- 21 European countries are currently part of PRACE, including 4 hosting partners.
- The first PRACE implementation phase started on July 1, 2010.
- The second PRACE implementation phase started on September 1, 2011.

PRACE RI Tier-0 Systems
- The first production system, a 1 Petaflop/s IBM BlueGene/P (JUGENE) at GCS (Gauss Centre for Supercomputing) partner FZJ (Forschungszentrum Jülich), is available to European scientists.
- The second production system, called CURIE and built by Bull at CEA-GENCI, has been available to European scientists since January 2011. It will reach its full capacity of 1.6 Petaflop/s by late 2011.
- The third production system, a 1 Petaflop/s Cray (HERMIT) at GCS partner HLRS (High Performance Computing Center Stuttgart), will be available to European scientists by the end of 2011, with an upgrade to 4-5 Petaflop/s in 2013.
- The fourth production system, a 3 Petaflop/s IBM (SuperMUC) at GCS partner LRZ (Leibniz-Rechenzentrum), will be available to European scientists starting in mid-2012.
- Italy (CINECA) has announced the deployment of its Tier-0 system (FERMI) for 2012, and Spain will follow in 2013.

PRACE RI Tier-1 Systems
- IBM Blue Gene/P
- Cray XT and XE
- IBM Power 6
- a range of large clusters, including GPU resources
These are made available from Finland, France, Germany, Ireland, Italy, Poland, Sweden, The Netherlands, Serbia, Switzerland, Turkey, and the United Kingdom. The Czech Republic will join in 2012.

Accessing the PRACE RI: access model
Proposals are evaluated in a single European peer review process governed by the PRACE Scientific Steering Committee.
Three types of resource allocations:
- Preparatory access: test / evaluation
- Project access: for a specific project, grant period ~1 year
- Programme access: resources managed by a community
Access is free of charge.
Funding: mainly national funding through the partner countries, plus a European contribution. The access model has to respect national interests (ROI).

Project access calls for proposals
Access to PRACE resources is open to researchers from European academic institutions and industry.
The first combined call for Tier-0 and Tier-1 resources has been opened.
The main target of the PRACE-2IP project is to merge Tier-1 services into the PRACE Research Infrastructure; these were previously provided by the DEISA projects (Distributed European Infrastructure for Supercomputing Applications).
Find out more at http://www.prace-ri.eu/hpc-access

DECI calls for Tier-1
DECI enables European researchers to obtain access to the most powerful national (Tier-1) computing resources in Europe, regardless of their country of origin or work, and enhances the impact of European science and technology at the highest level.
Proposals must deal with complex, demanding, innovative simulations that would not be possible without Tier-1 access.
Please note that in addition to access to computing resources, applications-enabling assistance from experts at the leading European HPC centres is offered, so that projects can be run on the most appropriate Tier-1 platforms in PRACE.

DECI call proposal form
General information: names, contacts, description summary
Requests:
- Computer resources (CPU hours)
- Architecture (BlueGene, Cray, x86 cluster, GPU)
- Software (grid middleware, parallel I/O libraries)
- Data (amount, long-term storage needs, sources/targets)
- Code status (bottlenecks, parallelization approach)
- Confidentiality/dissemination information

Current DECI-8 call
- Opening date: 2nd November 2011
- Closing date: 10th January 2012, 1600 CEST
- Start date: 1st May 2012
- Allocation period: 1 year
- Type of access: Project access
Find out more at: http://www.prace-ri.eu/prace-project-access-4th-call-for

Tier-1 systems in DECI-8 (I)
- Cray XT4/5/6 and Cray XE6: four large Cray XE and XT systems are available at EPCC (UK), KTH (Sweden), CSC (Finland), and CSCS (Switzerland). The largest of these machines has a peak performance of 829 TeraFlops and a total of 90,112 cores. 50 million compute core hours are available on this architecture (normalized to Power 4+).
- IBM Blue Gene/P: two BG/P systems are available at IDRIS (France) and RZG (Germany). The largest of these machines has a peak performance of 139 TeraFlops and a total of 40,960 cores. 3 million compute core hours are available on this architecture (normalized to Power 4+).
- IBM Power 6: three IBM Power 6 systems are available at RZG (Germany), SARA (Netherlands), and CINECA (Italy). The largest of these machines has a peak performance of 8 TeraFlops. 10 million compute core hours are available on this architecture (normalized to Power 4+).

Tier-1 systems in DECI-8 (II)
- Clusters: clusters are available at FZJ (Germany, Bull Nehalem cluster), LRZ (Germany, Xeon cluster), HLRS (Germany, NEC Nehalem cluster plus GPGPU cluster), CINES (France, SGI EX8200), CINECA (Italy, Westmere cluster plus GPGPU), PSNC (Poland, AMD plus GPGPU cluster), WCNS (Poland, HP cluster), ICHEC (Ireland, SGI EX8200), IPB (Serbia, AMD plus GPGPU cluster), and UYBHM (Turkey, Nehalem cluster). The largest cluster has a peak performance of 267 TeraFlops and a total of 23,040 cores. 38 million compute core hours are available on this architecture (normalized to Power 4+).
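The DECI budgets above are quoted in core hours normalized to IBM Power 4+, i.e. machine-local core hours are scaled by a per-architecture performance factor before being counted against an award. A small hedged Python sketch of that conversion follows; the numeric factors are invented for illustration, since the slides do not give the real DECI normalization table.

    # Hedged sketch of core-hour normalization. DECI budgets are quoted in
    # core hours "normalized to Power 4+": local core hours are multiplied
    # by a per-architecture factor. The factors below are INVENTED for
    # illustration; the slides do not give the real DECI normalization table.

    NORMALIZATION = {   # hypothetical normalized hours per local core hour
        "Cray XE6": 4.0,
        "IBM Blue Gene/P": 0.5,
        "IBM Power 6": 2.0,
    }

    def to_normalized(arch: str, local_core_hours: float) -> float:
        """Convert machine-local core hours into Power 4+-normalized hours."""
        return NORMALIZATION[arch] * local_core_hours

    def local_budget(arch: str, normalized_hours: float) -> float:
        """How many local core hours a normalized award buys on a machine."""
        return normalized_hours / NORMALIZATION[arch]

    # Example: what a 1-million normalized-hour award buys on each architecture
    for arch in NORMALIZATION:
        print(f"{arch}: {local_budget(arch, 1_000_000):,.0f} local core hours")

The design point is that a single normalized budget can be awarded and then spent on whichever Tier-1 architecture suits the project, with slower cores costing proportionally fewer normalized hours.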

DECI-9 call
- Start date: 1st November 2012
- Allocation period: 1 year
- Type of access: Project access
This will be the first DECI call with participation of a Czech Republic Tier-1 system.

Past project access calls for proposals
- PRACE Early Access call (May 10 – June 10, 2010): 10 proposals were granted a total of 332 million core hours.
- PRACE Project Access, 1st call for proposals (June 15 – August 15, 2010): 9 proposals requesting 362 million core hours were granted access.
- PRACE Project Access, 2nd call for proposals (November 1, 2010 – January 11, 2011): 17 proposals requesting 397.8 million core hours were granted access.
- PRACE Project Access, 3rd call for proposals (May 2, 2011 – November 1, 2011)
- PRACE Project Access, 4th call for proposals (November 2, 2011 – May 1, 2012)

Preparatory access calls for proposals
Continuously open, with a cut-off every 3 months. Three types of preparatory access:
- Type A, code scalability testing (max 100,000 core hours on JUGENE, 50,000 core hours on CURIE, max 2 months)
- Type B, code development and optimization by the applicant (max 250,000 core hours on JUGENE, 200,000 core hours on CURIE, max 6 months)
- Type C, code development with support from experts (max 250,000 core hours on JUGENE, 200,000 core hours on CURIE, max 6 months)

Thank you for your attention!
www.it4i.cz | www.it4i.eu | www.it4innovations.cz | www.it4innovations.eu
facebook.com/it4innovations
Kick-off meeting: November 29th, 2011