The Earth Simulator Current Status

The Earth Simulator Current Status
SC13, 2013. Ken'ichi Itakura (Earth Simulator Center, JAMSTEC), http://www.jamstec.go.jp
2013 SC13 NEC BOOTH PRESENTATION

JAMSTEC Organization (Japan Agency for Marine-Earth Science and Technology)
Research Sector (700):
- Research Institute for Global Change
- Institute for Research on Earth Evolution
- Institute of Biogeosciences
- Leading Project Laboratory System
- Research Support Department
- Kochi Institute for Core Sample Research
- Mutsu Institute for Oceanography
Development and Promotion Sector (300):
- Marine Technology Center
- Data Research Center for Marine-Earth Sciences
- Earth Simulator Center
- Center for Deep Earth Exploration
- Advanced Research and Technology Promotion Department
Management Sector (100):
- Planning Dept.
- Administration Dept.
- Safety and Environment Management Office
- Finance and Contracts Dept.
- Audit and Compliance Office

Location of Earth Simulator Facilities
The Earth Simulator site is in Yokohama; JAMSTEC headquarters are in Yokosuka, both near Tokyo.

Earth Simulator History
- Earth Simulator (ES): March 2002 - March 2009 (half-system operation from September 2008)
  Peak performance: 40 TFLOPS; main memory: 10 TB
- Earth Simulator 2 (ES2): March 2009 -
  Peak performance: 131 TFLOPS; main memory: 20 TB

ES2 System Outline
- Processor nodes (PNs): SX-9/E, 160 nodes (including 2 interactive nodes)
- Total peak performance: 131 TFLOPS; total main memory: 20 TB
- Inter-node network (IXS): 64 GB/s (bidirectional) per node
- Storage: 500 TB data storage system; storage server with 1.5 PB usable capacity (RAID6 HDD), connected over a 4 Gbps Fibre Channel SAN with FC switches
- User network: 10 GbE (partially 40 GbE link aggregation) connecting labs and user terminals through a login server
- Operation: NQS2 batch job system on the PNs; agent request support; statistics and resource-information management; automatic power-saving management; separate operation and maintenance networks with dedicated operation servers
- Maximum power consumption: 3000 kVA
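As a quick sanity check on these figures, the per-node numbers implied by the slide's totals can be derived directly (a minimal sketch using only the numbers above):

```python
# Back-of-the-envelope per-node figures for ES2, using only the
# totals quoted on the slide (160 nodes, 131 TFLOPS, 20 TB).
nodes = 160
peak_tflops = 131.0
memory_tb = 20.0

peak_per_node_gflops = peak_tflops * 1000 / nodes   # ~819 GFLOPS per node
memory_per_node_gb = memory_tb * 1000 / nodes       # 125 GB per node

print(f"peak/node:   {peak_per_node_gflops:.1f} GFLOPS")
print(f"memory/node: {memory_per_node_gb:.1f} GB")
```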

TOP500 Rank History
- ES (14 lists, June 2002 - November 2008): 1, 1, 1, 1, 1, 3, 4, 7, 10, 14, 20, 30, 49, 73
- ES2 (from the June 2009 list): 22, 31, 37, 54, 68, 94, 145, 217 (November 2012; 18th in Japan), then 342 and 472 (2013)
- Linpack peak performance ratio: 93.38% (2nd in the world)

HPC Challenge Benchmark
The HPC Challenge Class 1 award goes to the best performance on a base or optimized run submitted to the HPC Challenge website (http://www.hpcchallenge.org/). The benchmarks judged are Global HPL, Global RandomAccess, EP STREAM (Triad) per system, and Global FFT.

System                    TOP500 (Linpack)    #cores    G-FFT              Efficiency       #cores
JAMSTEC ES2 (SX-9)        Rank 217, 122 TF    1,280     Rank 3, 11.9 TF    9.1%             1,280
RIKEN AICS (K computer)   Rank 3, 10,510 TF   705,024   Rank 1, 205.9 TF   1.94%            663,552
Ratio (K : ES2)           x86                 x550      x17.3              x4.7 (ES2 : K)   x519
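The ratio row of the table can be reproduced from the per-system figures (a minimal sketch; the slide's x550 and x519 differ from the exact quotients only by rounding):

```python
# Reproduce the ratio row of the HPCC comparison from the
# per-system figures quoted on the slide.
es2 = {"linpack_tf": 122.0, "linpack_cores": 1280,
       "gfft_tf": 11.9, "gfft_eff": 9.1, "gfft_cores": 1280}
k = {"linpack_tf": 10510.0, "linpack_cores": 705024,
     "gfft_tf": 205.9, "gfft_eff": 1.94, "gfft_cores": 663552}

print(f"Linpack:       x{k['linpack_tf'] / es2['linpack_tf']:.0f}")        # x86
print(f"Linpack cores: x{k['linpack_cores'] / es2['linpack_cores']:.0f}")  # x551 (slide: x550)
print(f"G-FFT:         x{k['gfft_tf'] / es2['gfft_tf']:.1f}")              # x17.3
print(f"G-FFT eff.:    x{es2['gfft_eff'] / k['gfft_eff']:.1f}")            # ES2 is x4.7 more efficient
print(f"G-FFT cores:   x{k['gfft_cores'] / es2['gfft_cores']:.0f}")        # x518 (slide: x519)
```

Note how the efficiency ratio runs the other way: ES2 extracts far more of its peak on Global FFT than the K computer, which is the point the slide is making about vector machines.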

JAMSTEC Research Projects (FY2013)
- Research projects organized by JAMSTEC and collaboration projects: 40%
- Contract research projects (funded research and industrial use): 30%
  - Program for Risk Information on Climate Change (SOUSEI): 27% (3 themes)
  - Strategic industrial use: 3%
- Proposed research projects (earth science): 30% (24 projects)
Total resource: 1,300,000 node-hours, i.e. 156 L-batch nodes x 24 hours x (365 - 17.7 maintenance days).
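The quoted budget of 1,300,000 node-hours follows from the stated formula (a quick arithmetic check):

```python
# Verify the quoted annual compute budget: 156 L-batch nodes running
# around the clock for a year, minus 17.7 maintenance days.
nodes = 156
hours_per_day = 24
available_days = 365 - 17.7

node_hours = nodes * hours_per_day * available_days
print(f"{node_hours:,.0f} node-hours")  # ~1,300,000 as quoted
```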

Users
515 users from 137 organizations (January 2013): 64 universities, 15 labs, 41 companies, 17 overseas.
(Chart: user counts per year, 2002-2012, broken down into universities, labs, companies, and overseas users.)

Operation Results
As of October 31st, 2013, over 90% operational availability had been achieved.

Post-ES2 system

The Earth Simulator Roadmap (FY2007 - FY2020)
- ES: operation until March 2009, ending with half-system operation and data migration
- ES2: six years of operation, March 2009 to February 2015
- Future systems: a 3rd system (5 years?), then a 4th system as the feasibility-study (FS) target

Positioning of ES in the JAMSTEC Long-Term Vision
Observational data of the real world drive the verification and accuracy improvement of models; combining observation and simulation in this way is a particular strength of JAMSTEC.
Long-term vision (next 15 years):
(i) Analysis and prediction of global environmental change
(ii) Plate dynamics and disaster-prevention research on earthquakes and tsunamis
(iii) Evolution of life and the history of marine life
(iv) Resource engineering of materials and energy

Science Targets for the post-es2 Global environment change Submesoscale (ensemble calculations) Extreme Events Prediction, Guerrilla rainstorm, urban heat island Nesting/downscaling data assimilation Earthquake, Tsunami Seismic tomography, plate tectonics, database of earthquake scenarios Information of the optimal evacuation route/place from tsunami Geo-marine-bio science New Evolution of a life, and the history of a sea life Life model building which took in the information on a meta-genome High technology development New Resource engineering of a material and energy High resolution analysis of VIV( Vortex Induced Vibration) problem with long cable The on-the-ocean operation real-time supporting system of a vessel Planning of the energy-saving efficient operation schedule of a vessel 2013 SC13 NEC BOOTH PRESENTATION 14

Requests from JAMSTEC Researchers
(Fig. 1: calculation power and memory capacity relative to ES2. Fig. 2: architecture classification by B/F, throughput, and peak performance.)
To advance the main earth-science research programs on the post-ES2 system, users request (1) 10 times or more the calculation performance and (2) 5 to 10 times the memory capacity of ES2, assuming resource allocation and execution efficiency comparable to ES2. Many users regard the B/F ratio as important, and this can be called the common view among those promoting earth science.

Key point of post-es2 Peak Performance: 1.3~13PFlops (10 times or more faster than ES2) Memory Capacity: 100~200TB (5 times or 10 times bigger than ES2) Storage Capacity: 10~20PB (5 times or 10 times bigger than ES2) High data bandwidth between CPU and main memory (High B/F) Cooperation with other small systems Shared memory based storage Cooperation job scheduler system The most important problem is a benchmark program for an assessment system. Communication with a main user is required. 2013 SC13 NEC BOOTH PRESENTATION 16

ES2 Current Operation: Summary
- Six years of operation, March 2009 to February 2015
- TOP500 rank 217th and HPCC G-FFT rank 3rd in November 2012
- Calculation resources are concentrated on earth science
Future plan:
(1) Feasibility study of an exascale system in 2018 or later
(2) Post-ES2 system in March 2015:
- Peak performance: 1.3-13 PFLOPS (10 times or more that of ES2)
- Memory capacity: 100-200 TB (5 to 10 times that of ES2)
- High data bandwidth between CPU and main memory (high B/F)
- Cooperation with other small systems
Benchmark programs for system assessment, defined with the main users, are needed.

Japan Agency for Marine-Earth Science and Technology (JAMSTEC). Thank you for your kind attention! Visit JAMSTEC at booth #2545.