Inauguration Cartesius June 14, 2013


Inauguration Cartesius, June 14, 2013
Hardware is Easy... but what about software/applications/implementation?
Dr. Peter Michielse, Deputy Director

Agenda
- History
- Cartesius
- Hardware path to exascale: the next frontier
- Back to the future: actual improvements
- What can we offer to you, here and now?

1983-2008: National Supercomputing Services
- CDC Cyber 205 (1984): 1 CPU, 100 MFlop/s
- Cray C916/121024 (1993): 12 CPUs, 12 GFlop/s
- SGI Altix 3700 (2003): 1400 CPUs, 3.2 TFlop/s
- IBM Power6 (2008): 3456 cores, 65 TFlop/s

New national supercomputer: Cartesius
- Cartesius, 2013: 270 TFlop/s
- Cartesius, end 2014: > 1 PFlop/s
So we face a factor of 10M in peak performance in about 30 years.
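That factor follows directly from the peak figures on this and the previous slide, from the 100 MFlop/s Cyber 205 in 1984 to the > 1 PFlop/s Cartesius in 2014:

$$\frac{10^{15}\ \text{Flop/s (2014)}}{10^{8}\ \text{Flop/s (1984)}} = 10^{7} = 10\,\text{M}$$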

High-level architecture of Cartesius
- InfiniBand FDR14 low-latency network
- Fat node island: 32 nodes, 1,024 cores, 256 GB/node
- Multiple thin node islands: 32 nodes and 1,024 cores each, 2 GB/core (64 GB/node)
- 2 interactive nodes: 16 cores, 128 GB/node
- Multiple service & management nodes
- 180 TB home file system
- > 5 PB scratch & project Lustre file systems
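As a consistency check on the island figures, assuming 32-core nodes (an assumption; the slide itself gives only node counts, core counts and memory sizes):

$$32\ \text{nodes} \times 32\ \text{cores/node} = 1{,}024\ \text{cores}, \qquad 32\ \text{cores} \times 2\ \text{GB/core} = 64\ \text{GB/node}$$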

Performance roadmap and plans

System              Period                  Peak Perf.  Av. Application Capacity (Huygens = 1.0)
Huygens             Sept. 2008 - June 2013  65 TF       1.0
Cartesius Phase 1   July 2013 - 2H 2014     270 TF      3.4-13.0
Cartesius Phase 2   from 2H 2014            > 1 PF      11.8-48.3

On-demand plans:
- Performance growth relative to user demand and scientific challenges
- Accelerator options to be investigated with user applications; this will start as soon as possible, with an accelerator platform operational before year end

The next (or final?) frontier
[Image; copyright Paramount Pictures]

The hardware road to the future
[Table from Rick Stevens, Argonne National Laboratory, USA]
Hardware: challenging, but considered doable.

Only rely on hardware?
- Although Moore's law seems to continue, individual CPU cores do not get faster:
  - Huygens Power6 core: 18.8 GFlop/s peak performance (2008)
  - The cores currently installed in Cartesius have around the same peak performance (double the flops per cycle, at half the frequency)
- Scaling to millions of cores is not straightforward:
  - Implementation issues
  - Algorithmic design
- New hardware: accelerators, GPUs, many-core, FPGAs, personalities, ...
- Reliability and resilience: how to deal with the frequent hardware failures that are to be expected?
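The "double flops/cycle, half frequency" remark can be made concrete with peak = clock rate × flops per cycle. The 4.7 GHz Power6 clock is not on the slide but is the documented Huygens clock rate, and the Cartesius line simply applies the slide's own halving/doubling:

$$\text{Power6 core: } 4.7\ \text{GHz} \times 4\ \text{flops/cycle} = 18.8\ \text{GFlop/s}$$
$$\text{Cartesius core: } \approx 2.35\ \text{GHz} \times 8\ \text{flops/cycle} \approx 18.8\ \text{GFlop/s}$$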

June 13, 2007: Huygens part I
At the 2007 inauguration of Huygens part I, we said:
- Until 2032: calculatio fortunae may hold for hardware development (classical, FPGA, GPU, quantum, ...)
- But it has to be assisted (or even more than that) by software development and intelligent implementation
- Climate research, turbulence modelling: getting to a 1M application performance improvement in 25 years will probably require 1k of it coming from software and implementation...
Let's see how far we got in 6 years.
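Read as orders of magnitude, the 2007 claim splits the hoped-for millionfold gain evenly between hardware and software:

$$10^{6}\,(\text{total}) = 10^{3}\,(\text{hardware}) \times 10^{3}\,(\text{software and implementation})$$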

Improvements over 6 years
Prof. Henk Dijkstra (UU), climate research:
- Implementation improvement: 4x over the past 6 years, no algorithmic improvement
- Next challenge: coupled ocean/atmosphere; ocean (0.1 degree, 45 levels) and atmosphere (0.5 degree, 45 levels), grid of 5×10^8 + 10^8 points, 100 years

Improvements over 6 years
Prof. Arthur Veldman and Dr. Roel Verstappen (both RUG), turbulence modelling:
- Algorithm/implementation improvement: 100x over the past 18 years (on average 4.65x per 6 years)
- Next challenge: DNS of a fast car (2015), Re = 10^7, grid of 10^13 points, 10^22 flops
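The 6-year average follows from compounding the 100x gain over three consecutive 6-year periods:

$$\sqrt[3]{100} \approx 4.64 \approx 4.65\ \text{per 6-year period}$$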

EC vision on HPC
15 February 2012: "HPC: Europe's place in a global race". Some conclusions and related recommendations:
- Renew the HPC strategy, similar to the decision to set up the European Space Agency (ESA) in 1975;
- Implement research and development projects concerning exascale roadmaps, including both hardware and software issues;
- Europe has an outstanding position in scalable codes and expertise. Investing in exascale software development in these fields contributes to keeping competitiveness in areas of significant importance for science and industry.
But: Member States have to take care of their own national (Tier-1) systems!

What does that mean for you?
- Let's indeed assume that "Hardware is Easy"; that question may be answered by ASML, Bull and Intel this afternoon
- Remember that the user is at the top of the food chain
- Focus on:
  - Algorithms
  - Implementation
  - Scaling
  - Curriculum Scientific Computing

What does that mean for you?
Expertise is available for Cartesius and Lisa:
- Helpdesk
- Parallelisation and optimisation of codes
- Implementation improvements
- Scaling
- DCCP projects
- Wim Nieuwpoort Award
- Parallel I/O techniques
- Visualisation and rendering
- Training (also customised)
- 3rd-party codes
- PRACE and DECI access requests
And let's not forget: also for grid, HPC cloud, Hadoop, Beehub, ...

From SURF SciencePark