1 CSC resources intro: most powerful computational resources in Finland Tomasz Malkiewicz CSC IT Center for Science Ltd.

2 Outline: Intro: Why supercomputers? CSC at a glance. Kajaani datacenter. Finland's new supercomputers: Sisu (Cray XC30) and Taito (HP cluster). Live demo/hands-on (Taito). CSC resources available for researchers.

3 Supercomputing: serial and parallel processing. Serial computing: a single processing unit (core) is used for solving a problem, and a single task is performed at once. Parallel computing: multiple cores are used for solving a problem; the problem is split into smaller subtasks, and multiple subtasks are performed simultaneously. [Diagram: problem -> core -> result, versus problem -> cores c1, c2, c3, ..., cn -> result.]

4 Types of parallel computers. Shared memory: all the cores can access the whole memory. Distributed memory: each core has its own memory, and communication is needed to access the memory of other cores. Current supercomputers combine the distributed and shared memory approaches.

5 Data parallelism. Data is distributed to processor cores, and each core performs (nearly) identical tasks on different data. Example: summing the elements of a 2D array; each core sums its own part of the array, and the individual sums are combined in the end. [Diagram: the array split into four blocks, one per core, with the per-core partial sums.]
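
The same pattern can be mimicked on any multicore machine. Below is a minimal bash sketch (all file and function names hypothetical) that sums the integers 1..1000 with four background jobs, one per "core", and combines the partial sums at the end:

    #!/bin/bash
    # Toy data parallelism: each background job sums its own chunk of 1..1000.
    partial() {                     # sum the integers from $1 to $2
        local s=0 i
        for ((i=$1; i<=$2; i++)); do ((s+=i)); done
        echo "$s"
    }
    for c in 0 1 2 3; do
        partial $((c*250+1)) $(((c+1)*250)) > "sum_$c.tmp" &   # one subtask per core
    done
    wait                            # all subtasks must finish before combining
    total=0
    for c in 0 1 2 3; do
        ((total += $(cat "sum_$c.tmp"))); rm "sum_$c.tmp"
    done
    echo "total=$total"             # prints total=500500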

6 Task parallelism. Different cores perform different tasks on the same or different data. Example: signal processing with four filters as separate tasks. Data is processed as segments: core 2 obtains a segment after core 1 has processed it, while core 1 starts on a new segment; when the first segment reaches core 4, all cores are busy. [Diagram: data segments flowing through filter cores 1-4.]
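
This filter-chain pattern is exactly how a Unix shell pipeline behaves: each stage runs as its own process, and segments stream to the next stage while earlier stages keep working on new data. A minimal sketch (file name and filter commands purely illustrative):

    # four "filters" as separate concurrent processes
    cat data.txt | tr 'a-z' 'A-Z' | grep -v '^#' | sed 's/NA/0/g' | gzip > result.gz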

7 Why supercomputers? [Plot: Gromacs performance (ns/day) vs. number of cores for a lipid MD benchmark, 120k atoms with PME, on Louhi, Vuori, Taito and Sisu.]

8 CSC and High Performance Computing

9 Supercomputers. A supercomputer is a computer at the frontline of contemporary processing capacity, particularly speed of calculation. Fastest supercomputers: China's Tianhe-2 (no. 1, 33.86 PFlop/s on the LINPACK benchmark), Finland's Sisu (no. 189, 245 TFlop/s) and Finland's Taito (no. 267, 191 TFlop/s).

10 CSC Computing Capacity

11 CSC at a glance. Founded in 1971 as a technical support unit for the Univac 1108. Connected Finland to the Internet in 1988. Reorganized as a company, CSC Scientific Computing Ltd., in 1993. All shares transferred to the Ministry of Education and Culture of Finland in 1997. Operates on a non-profit principle. Facilities in Espoo and Kajaani. Staff ~250 people.

12 CSC's services: FUNET services, computing services, application services, data services for science and culture, and information management services. Customers: universities, polytechnics, ministries, the public sector, research centers and companies.

13 FUNET and data services. FUNET: connections to all higher education institutions in Finland; Haka identity management; campus support; the NORDUnet network. Data services: digital preservation and Data for Research (TTA); the National Digital Library (KDK); database and information services; nic.funet.fi, freely distributable files via FTP since 1990; memory organizations (Finnish university and polytechnic libraries, the Finnish National Audiovisual Archive, the Finnish National Archives, the Finnish National Gallery).

14 Users. About 700 active computing projects; 3000 researchers use CSC's computing capacity; 4250 registered customers. The Haka identity federation covers all universities and higher education institutes. Funet: the Finnish research and education network.

15 Users of computing resources by discipline. [Pie chart: total active users split across biosciences, physics, chemistry, language research, nanoscience, computational fluid dynamics, engineering, computational drug design, earth sciences and other disciplines.]

16 Computing usage by discipline. Total 201.6 million billing units. [Pie chart: shares for physics, nanoscience, chemistry, biosciences, astrophysics, computational fluid dynamics, materials sciences, computational drug design and other disciplines.]

17 Users of computing resources by organization. [Pie chart: total active users split across the University of Helsinki, Aalto University, University of Turku, University of Oulu, CSC (projects), Tampere University of Technology, University of Jyväskylä, CSC (PRACE), University of Eastern Finland, University of Tampere and others.]

18 THE KAJAANI DATACENTER


21 Power distribution (FinGrid)

22 Kajaani site

23 Sisu now

24 Sisu rear view

25 Taito (HP) hosted in SGI Ice Cube R80

26 SGI Ice Cube R80

27 Taito now

28 Data center specification
- 2.4 MW combined hybrid capacity
- 1.4 MW modular free-air-cooled datacenter (hosting e.g. Taito): upgradable in 700 kW factory-built modules; order to acceptance in 5 months; 35 kW per extra-tall rack (12 kW is common in industry); PUE forecast < 1.08 (pPUE L2,YC)
- 1 MW HPC datacenter: optimised for the Cray supercomputer and the T-Platforms prototype; 90% water cooling

29 CSC SUPERCOMPUTERS

30 Overview of New Systems

                   Phase 1                           Phase 2
                   Cray             HP               Cray               HP
    Deployment     done             done             probably 2014
    CPU            Intel Sandy Bridge 2.6 GHz        next-generation processors
    Interconnect   Aries            FDR InfiniBand   Aries              FDR InfiniBand
    Cores                                            ~40,000            ~17,000
    Tflops         244 (2x Louhi)   180 (5x Vuori)   ~1700 (16x Louhi)  ~515 (15x Vuori)
    Tflops total   424 (3.6x Louhi)                  ~2215 (20.7x Louhi)

31 IT summary: Sisu, a Cray XC30 supercomputer
- Fastest computer in Finland
- Phase 1: 385 kW, 244 TFlop/s
- 16 cores x 2 GB memory per compute node; 4 login nodes x 256 GB
- Very high density, large racks

32 IT summary cont.: Taito, an HP supercluster
- 1152 Intel CPUs
- 16 cores x 4 GB memory per node; 16 fat nodes with 16 cores x 16 GB per node; 6 login nodes x 64 GB
- 180 TFlop/s
- 30 kW, 47 U racks
HPC storage
- Fast parallel storage, 4 PB in total after the planned expansion (see summary)
- Supports the Cray and HP systems

33 Sisu Phase 2: Cray supercomputer
- Future Intel Xeon E5 v3 product family processors
- Cray Aries interconnect
- ~40,000 cores
- 64 GB memory per node

34 Cray Dragonfly topology
- All-to-all network between groups
- Two-dimensional all-to-all network within a group
- Optical uplinks to the inter-group network
(Source: Robert Alverson, Cray, Hot Interconnects 2012 keynote)

35 Cray environment (Sisu): a typical Cray environment
- Compilers: Cray, Intel and GNU
- Cray MPI and Cray-tuned versions of all the usual libraries
- SLURM batch system
- Default shell: bash (previously tcsh)
- Character encoding: UTF-8 (Latin-15, alias ISO 8859-15, currently kept as-is on Vuori and Hippu)
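
On Cray systems code is normally built through the compiler driver wrappers, which invoke whichever compiler module is loaded and link the Cray-tuned libraries automatically; a minimal sketch (source file names hypothetical):

    ftn -o hello_mpi hello_mpi.f90          # Fortran + MPI via the wrapper
    cc  -o hello_mpi hello_mpi.c            # C equivalent (CC for C++)
    module switch PrgEnv-cray PrgEnv-gnu    # change the underlying compiler, then rebuild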

36 Sisu (Phase 2)
- AVX2: code may need to be optimized for the wider vector size
- DDR4: higher bandwidth, lower power consumption
- Maximum job size likely to increase
- Native SLURM is on the way, but unlikely to be available right after Sisu's July hardware update

37 Taito Phase 2: HP supercluster
- Intel Xeon E5 v2 product family and the future Intel Xeon E5 v3 family
- FDR InfiniBand interconnect
- ~17,000 cores
- Per-node memory sizes: 64, 128 and 256 GB, and 1.5 TB

38 HP environment (Taito)
- Compilers: Intel, GNU
- MPI libraries: Intel, MVAPICH2, OpenMPI
- Batch queue system: SLURM
- New, more robust module system: only compatible modules are shown by module avail; use module spider to see all
- Disk system changes
- Default shell: bash (used to be tcsh)
- Character encoding: UTF-8

39 Core development tools
- Intel XE development tools: C/C++ (icc), Fortran (ifort) and Cilk+ compilers; profilers and trace utilities (VTune, Thread Checker, MPI Checker); the MKL numerical library; the Intel MPI library (only on the HP)
- Cray Application Development Environment
- GNU Compiler Collection
- License tokens shared between the HP and Cray systems
- TotalView debugger
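
As an illustration of the Intel toolchain, MKL can be linked with a single flag; a sketch with hypothetical source file names:

    icc   -O2 -xAVX -mkl solver.c   -o solver    # C with MKL, targeting Sandy Bridge
    ifort -O2 -xAVX -mkl solver.f90 -o solver    # Fortran equivalent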

40 Performance of numerical libraries. [Bar chart: DGEMM 1000x1000 single-core GFlop/s for ATLAS 3.8, ATLAS 3.10, ACML 5.2, ifort 12.1 matmul (RedHat 6.2 RPM), LibSci and MKL on a 2.7 GHz Sandy Bridge, and ACML and MKL 11 on a 2.3 GHz Opteron Barcelona (Louhi). Sandy Bridge peaks: 3.5 GHz x 8 Flop/Hz in turbo when only one core is active, 2.7 GHz x 8 Flop/Hz otherwise; Opteron peak: 2.3 GHz x 4 Flop/Hz.] MKL is the best choice on Sandy Bridge, for now. (On Cray, LibSci is a good alternative.)

41 Modules
- Some software installations conflict with each other, for example different versions of programs and libraries
- Modules make it possible to install conflicting packages on a single system
- The user selects the desired environment and tools with module commands; this can also be done on the fly

42 Taito module system
- module avail shows only the modules that can be loaded into the current setup (no conflicts or extra dependencies)
- Use module spider to list all installed modules and to resolve conflicts and dependencies
- No PrgEnv- modules (on Taito)
- Changing the compiler module also switches the MPI and other compiler-specific modules
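
For example (package and module names purely illustrative; the commands themselves are the ones the slide describes):

    module avail                  # modules compatible with the current setup
    module spider                 # list every installed module
    module spider gromacs         # all versions of one package (name illustrative)
    module load gromacs-env       # load one package (module name illustrative)
    module list                   # currently loaded modules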

43 Live demo/hands-on (Taito)

    ssh trng01@taito.csc.fi    (use your own training account in place of trng01)
    mkdir own_username         (use your own user name)
    cd own_username
    module avail

44 Live demo/hands-on cont. Create a batch script with nano test_hostname.sh (CTRL+O to save, CTRL+X to exit):

    #!/bin/bash -l
    #SBATCH -J print_hostname
    #SBATCH -o output.txt
    #SBATCH -e errors.t
    #SBATCH -t 00:01:00
    #SBATCH -p test
    #
    echo "This job runs on the host:"; hostname

Submit the job with sbatch test_hostname.sh

45 Live demo/hands-on cont. Check out the output:

    less output.txt    (type q to quit)
    less errors.t      (type q to quit)

More examples:
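
The submitted job can also be followed and, if necessary, cancelled with the standard SLURM commands (the job ID below is illustrative):

    squeue -u $USER             # your queued and running jobs
    scontrol show job 123456    # full details of one job
    scancel 123456              # remove a job from the queue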

46 Disk space. [Diagram: your workstation connects to the taito.csc.fi and sisu.csc.fi login nodes and, via the iRODS client and SUI, to the archive. Login and compute nodes each have a local $TMPDIR and share $WRKDIR, $HOME and $USERAPPL; the new tape $ARCHIVE in Espoo sits behind an iRODS interface with a disk cache and is used with icp, iput, ils and irm.]

47 Disk space cont.
- 4 PB on DDN
- New $HOME directory (on Lustre)
- $WRKDIR (not backed up), soft quota 5 TB
- HPC_ARCHIVE: 2 TB per user, common between the Cray and HP systems
- Disk space through IDA: 1 PB for universities, 1 PB for the Academy of Finland (SA), 1 PB shared between SA and ESFRI; more can be requested
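
Since $HOME and $WRKDIR live on Lustre, usage against these quotas can be checked with the standard Lustre tool; a sketch, assuming /wrk is the $WRKDIR mount point:

    lfs quota -u $USER /wrk     # disk usage and quota on the work area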

48 Moving files: best practices
- tar & bzip first
- Use rsync, not scp: rsync -P resumes interrupted transfers
- Blowfish may be faster than AES (the CPU is the bottleneck)
- Funet FileSender (max 50 GB); files can also be downloaded with wget
- Consider: SUI, IDA, iRODS, a batch-like process, staging
- CSC can help tune e.g. TCP/IP parameters
- Funet backbone 10 Gbit/s
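
Putting the items above together, a restartable transfer over a lighter cipher looks like this (paths illustrative; blowfish was a supported OpenSSH cipher at the time):

    rsync -P -e "ssh -c blowfish" results.tar.bz2 \
          user@taito.csc.fi:/wrk/user/    # -P = --partial --progress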

49 ARCHIVE dos and don'ts
- Don't put small files in HPC ARCHIVE: small files waste capacity, and less than 10 MB is small
- Keep the number of files small: tar and bzip files first
- Don't use ARCHIVE for incremental backup (store, delete/overwrite, store, ...): space on tape is not freed up for months or years!
- Maximum file size 300 GB
- Default quota 2 TB per user
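
For example, bundling many small files into one compressed archive before storing it (names illustrative; iput is the iRODS upload command from the disk-space slide):

    tar -cjf mydata.tar.bz2 mydata/    # one large file instead of many small ones
    tar -tjf mydata.tar.bz2            # verify the contents
    iput mydata.tar.bz2                # upload via the iRODS interface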

50 CSC RESOURCES AVAILABLE FOR RESEARCHERS

51 Currently available computing resources
- Sisu: >11,000 cores, >23 TB memory
- Taito: small and medium-sized tasks
- Application server Hippu: interactive use without a job scheduler; postprocessing, e.g. visualization
- FGI
- Cloud
- Bull system

52 Three service models of cloud computing
- SaaS: software
- PaaS: operating systems
- IaaS: computers and networks

53 Example: virtualization in Taito. The Taito cluster has two types of nodes, HPC and cloud. HPC nodes run the host OS (RHEL) directly, while cloud nodes host virtual machines running their own guest OS (e.g. Ubuntu or Windows).

54 Bull
- In pilot/project use until the end of August 2014; no guarantee of availability
- 38 NVIDIA K40 nodes (76 GPUs), 12 GB memory per card
- 45 Intel Xeon Phi nodes (90 Xeon Phis), 16 GB memory per card
- Energy-efficient CPUs

55 How to access Bull (plan)
- Intel Xeon Phi: ssh taito-mic.csc.fi (TBC)
- NVIDIA K40: ssh taito-gpu.csc.fi

56 Grand Challenges
- Normal GC call (every half a year to a year): new CSC resources available for a year; no lower limit on the number of cores
- Special GC call, mainly for the Cray (when needed): possibility for short runs (a day or less) with the whole Cray
- Remember also PRACE/DECI

57 NX screenshot

58 Courses
- Sisu Phase 2 workshop: late 2014
- Taito Phase 2 workshop: spring 2015
- CSC courses: CSC HPC Summer School; Spring, Autumn and Winter Schools; Introduction to Linux and Using CSC Environment Efficiently; Parallel Programming

59 How to get access to CSC supercomputers? Go to sui.csc.fi (Haka authentication) and sign up.

60 Summary
- Sisu supercomputer: installation in July-August 2014, general availability later in 2014
- Taito supercluster: installation planned for 2014, ~17,000 cores
- Bull system: general availability planned for 2014; 45 nodes with 2 Intel Xeon Phi coprocessors each and 38 nodes with 2 NVIDIA Tesla K40 accelerators each
- DDN HPC storage system: adding 1.9 PB for a total of 4 PB of fast parallel storage; supports the Cray and HP systems; aggregate bandwidth > 80 GB/s
[Plot: Gromacs performance (ns/day) on Taito, Sisu, FGI, Vuori and Louhi.]
