High Performance Computing at the Jülich Supercomputing Centre

1 High Performance Computing at the Jülich Supercomputing Centre. Jutta Docter, Institute for Advanced Simulation (IAS), Jülich Supercomputing Centre (JSC). Member of the Helmholtz Association.

2 Overview: the Jülich Supercomputing Centre at Forschungszentrum Jülich; JUQUEEN (installation, administration, usage, applications); the fileserver; PRACE (Partnership for Advanced Computing in Europe).

3 Where is Jülich? (Map showing the location of Jülich.)

4 Forschungszentrum Jülich (FZJ), home of the JSC.

5 FZJ at a glance (2012). Budget: 557 million euros (of which 173 million euros third-party funding); staff: 5,200 (including 1,650 scientists); visiting scientists: 860 from 40 countries; trainees: 90 per year; publications: 2,200 per year; patents and licences: 16,892; industry cooperations: 363. Research fields: health, energy and environment, information technology, and key technologies for tomorrow.

6 Jülich Supercomputing Centre (JSC)

7 Jülich Supercomputing Centre. Supercomputer operation for the centre (FZJ), the region (JARA), the nation (NIC, GCS) and Europe (PRACE, EU projects). Application support: traditional and SimLab support models, scientific visualization, and peer-review support and coordination. R&D work: methods and algorithms, performance analysis and tools, community data management services, computer architectures, and Exascale Laboratories with IBM, Intel and NVIDIA. Education and training.

8 Gauss Centre for Supercomputing, the German representative in PRACE: an alliance of the three German national supercomputing centres, the Jülich Supercomputing Centre (JSC), the Leibniz-Rechenzentrum (LRZ) in Munich, and the Höchstleistungsrechenzentrum Stuttgart (HLRS). It supports computational science through multi-petaflop/s supercomputers, multi-petabyte storage, multi-gigabit networking infrastructure, and large-scale projects.

9 JSC overview: users are served by cross-sectional teams (Methods & Algorithms, Application Optimization, Parallel Performance) and by Simulation Laboratories (Biology, Climate Science, Molecular Systems, Plasma Physics, Solid & Fluid Engineering).

10 JSC supercomputer systems: a dual-track approach. General-purpose track: IBM Power 4+ JUMP (2004, 9 TFlop/s), IBM Power 6 JUMP (9 TFlop/s), JUROPA (2009, 200 TFlop/s) with HPC-FF (100 TFlop/s), and the planned JUROPA++ cluster (1-2 PFlop/s, plus booster). Highly scalable track: IBM Blue Gene/L JUBL (45 TFlop/s), IBM Blue Gene/P JUGENE (1 PFlop/s), IBM Blue Gene/Q JUQUEEN (5.9 PFlop/s). Both tracks share a general-purpose file server (GPFS, Lustre).

11 Transition from JUGENE to JUQUEEN (timeline): the BG/P system JUGENE remained in production from February to July 2012; JUQUEEN hardware installation began in April 2012, followed by user access and stepwise production from mid-2012 through February 2013.

12 JUQUEEN, 8 racks, April 2012

13 JUQUEEN, 24 racks, November 2012 (Top500, November 2012: position 5)

14 JUQUEEN, 28 racks, January 2013 (Top500, June 2013: position 7)

15 JUQUEEN hardware, cables: 58.9 t of computational hardware; 3,584 torus data cables (29.7 km); Gigabit Ethernet cables (18 km); 84 Ethernet cables (3.4 km).

16 JUQUEEN hardware, power: 4.4 km of power cables with a total weight of 7.9 t; 112 power connections (3 phases); 336 circuit breakers; 255 m of steel for 7 frames (each rack weighs 2 t); 200 m of cable trays.

17 JUQUEEN hardware, water cooling: 280 m of stainless-steel pipes with separate valves per row; 2 pumps with a maximum flow rate of 210 m³/h each, plus a special pump control system for redundancy; demineralized water at 18 °C supply and 27 °C return temperature; supply rate 28 gal/min (about 106 l/min) per rack.

19 (Figure; source: IBM)

20 JUQUEEN configuration: 28 racks of Blue Gene/Q (7 rows of 4 racks); 28,672 nodes (458,752 cores); 896 node boards; 448 TB of main memory (16 GB per node); overall peak performance 5.9 Petaflops; power consumption on average 63 kW per rack. Processor: IBM PowerPC A2 (1.6 GHz, 64-bit), 16 cores per node, 16-way SMP with a quad floating-point unit. Internal network: 5D torus (dimensions A, B, C, D, E), 40 GB/s, 2.5 μs latency.
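A quick sanity check of the quoted peak: each A2 core has a four-wide double-precision FPU with fused multiply-add, i.e. 8 flops per cycle, so

$$
R_\text{peak} = 28{,}672\ \text{nodes} \times 16\ \tfrac{\text{cores}}{\text{node}} \times 1.6\ \text{GHz} \times 8\ \tfrac{\text{flops}}{\text{cycle}} \approx 5.87\ \text{PFlop/s},
$$

which rounds to the quoted 5.9 Petaflops.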

21 Blue Gene/Q node board (with water cooling). Source: IBM

22 Blue Gene/Q compute card with heat spreader

23 JUQUEEN I/O drawer

24 JUQUEEN environment: 248 I/O nodes (8 per drawer, mounted on top of the racks: 27 racks with 8 each, plus 32 on one I/O-rich rack, i.e. 1 ION per node board there), connected to two Cisco Nexus switches; front-end nodes (Red Hat) for user login and data transfer (juqueen@fz-juelich.de); 1 service node (Red Hat) for the Blue Gene software and the DB2 database; 1 backup service node, to be activated manually. The service machines are IBM p7 740 servers with 8 cores (3.55 GHz), 128 GB of memory and a local DS5020 storage device (16 TB).

25 JUQUEEN environment (diagram): users reach the front-end nodes via SSH; the service node (with its backup and a RAID-backed DB2 database) runs the Blue Gene control system and launches jobs via runjob; the Nexus switches connect the system to the JUST fileserver.

26 JUQUEEN cooling system (diagram of the 28 racks in 7 rows and the heat exchanger): one circuit operates at 5.5 bar and shuts down at 0.5 bar; the deionized-water circuit operates at 2.3 bar with a warning at 1.5 bar and no automatic shutdown; two pumps, flow 3 m³/min.

27 Blue Gene/Q coolant monitor, one per rack

28 Cooling issues: a learning curve! Cooling problems result in a complete outage. External water issues: far too hot (32 °C) or too cold (7 °C). Pressure loss during pump switch-over: reconfigured to run both pumps in parallel. Pressure loss during work on the outer circuit. Loss of water pressure, warnings, air bubbles in the system, sensor misalignment? Replacement of empty node boards? No, but prefill; installation of venting devices to eliminate the bubbles; checking of pressure variations.

29 Hardware failures and monitoring. A node or node-board failure puts the node board in error (the midplane becomes unavailable): replace it. There is no refund for aborted user jobs (write checkpoints!). Correctable errors: run diagnostics, replace if necessary(?). Cable failure: use the integrated redundancy. Preventative measure: run diagnostics as batch jobs. The BG/Q TEAL subsystem generates alerts and sends messages to the sysadmins; an Icinga server and JSC checking scripts also send messages to the sysadmins and alarm the people on duty (8:00-24:00, Mon-Sat).

30 Hardware replacements. Regular: about 57 nodes and 7 node boards per month (out of 28,672 nodes; MTTF > 1 year), with overhead for diagnosing, reporting, ordering, replacing, watching and documenting. Also some I/O drawers and midplanes (delicate pins). A service-card outage on the master-clock rack is a single point of failure. Additional preventive replacements: manufacturing defects on compute nodes; bulk power modules (load distribution).

31 System administration: the JSC Blue Gene administration team (3.5 PM) with a designated system administrator of the week; the JSC application support team; IBM coverage Mon-Fri 8:00-17:00 with on-site hardware and software personnel.

32 (New) software issues. Blue Gene software updates plus efixes: June 2012, January 2013 and August 2013; Mellanox optical driver firmware updates (3 times); GPFS, LoadLeveler, compilers, etc. I/O nodes aborted with out-of-memory errors, requiring adjustment and optimization of parameters (pagepool, buffers, etc.).

33 I/O performance, I/O nodes to GPFS. I/O optimization for GPFS is ongoing at FZJ (and at Argonne National Laboratory). Read performance is worse than write performance, and especially bad for shared files. The lower layers were tested first: the network setup is OK. Further steps: optimizing GPFS performance parameters, setting up a dedicated GPFS/network test environment, and regular teleconferences with IBM; the experts need to be brought together!
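To make the problematic access pattern concrete, here is a minimal MPI-IO sketch (hypothetical path and block size, not the actual FZJ test code) in which every rank writes and then reads back its own contiguous block of one shared file; timing the two collective phases separately exposes the read/write asymmetry described above.

```c
#include <mpi.h>
#include <stdlib.h>

#define BLOCK (4 * 1024 * 1024)   /* 4 MiB per rank, an assumed test size */

int main(int argc, char **argv)
{
    int rank;
    MPI_File fh;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char *buf = malloc(BLOCK);
    MPI_Offset off = (MPI_Offset)rank * BLOCK;  /* each rank owns one contiguous block */

    /* all ranks write into a single shared file (path is illustrative) */
    MPI_File_open(MPI_COMM_WORLD, "/work/iotest.dat",
                  MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
    MPI_File_write_at_all(fh, off, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);

    /* ...and read the same block back; time both phases separately */
    MPI_File_read_at_all(fh, off, buf, BLOCK, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_close(&fh);

    free(buf);
    MPI_Finalize();
    return 0;
}
```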

34 JSC LinkTest, a Blue Gene torus link bandwidth tester: an all-to-all ping-pong test that reports the bandwidth distribution for intra-node communication, for communication via links A, B, C and D, and for communication via link E.
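The kernel behind such a tester is an ordinary MPI ping-pong; the sketch below (not the actual LinkTest source; message size and repetition count are assumptions) measures the bandwidth of a single rank pair, which LinkTest repeats for all pairs to obtain the distribution.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define MSG  (1 << 20)   /* 1 MiB test message (assumed) */
#define REPS 100

/* Ping-pong bandwidth in GB/s between the calling rank and 'peer'. */
double pingpong_bw(int peer, int rank, char *buf)
{
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank < peer) {
            MPI_Send(buf, MSG, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG, MPI_BYTE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        } else {
            MPI_Recv(buf, MSG, MPI_BYTE, peer, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG, MPI_BYTE, peer, 0, MPI_COMM_WORLD);
        }
    }
    double t = (MPI_Wtime() - t0) / (2.0 * REPS);   /* one-way time per message */
    return MSG / t / 1e9;
}

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    char *buf = malloc(MSG);

    if (size >= 2 && rank < 2) {   /* demo: only the pair (0,1) */
        double bw = pingpong_bw(1 - rank, rank, buf);
        if (rank == 0)
            printf("pair (0,1): %.2f GB/s\n", bw);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}
```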

35 JSC llview (screenshot of the job and system monitoring display)

36 LoadLeveler batch scheduler, job classes. The table lists class/queue, maximum nodes, wall-clock limit and priority (most numeric values were lost in transcription): the largest midplane classes, starting at m048, are available on demand only; the smaller m classes and the node-board classes (n001 to n008) have wall-clock limits from hours down to minutes; a serial class allows 0 compute nodes for 60 min. A job for one of these classes is submitted with a LoadLeveler command file, as sketched below.
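This is a minimal, hypothetical LoadLeveler job command file for Blue Gene/Q; the class name, partition size and executable path are illustrative, not values from the table above.

```
#!/bin/bash
# Minimal LoadLeveler command file (illustrative values)
# @ job_name         = bgq_test
# @ job_type         = bluegene
# @ bg_size          = 32
# @ class            = n001
# @ wall_clock_limit = 00:30:00
# @ output           = $(job_name).$(jobid).out
# @ error            = $(job_name).$(jobid).err
# @ queue

# start 16 MPI ranks per node on the allocated partition
runjob --ranks-per-node 16 : /path/to/app.x
```

Such a file would be submitted with llsubmit.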

37 Blue Gene Navigator (screenshot; annotations show classes n001, n002 and n004, n008 with ongoing replacements)

38 LoadLeveler, new version: the first and biggest BG/Q site testing LoadLeveler! Issues seen: the negotiator dies (and restarts automatically); reservations bigger than one midplane are not honored; jobs run on reserved partitions; jobs erroneously wait for missing cables (a restart helps); jobs are not scheduled although resources are free (not all possibilities in the 5D torus are checked). Stabilizing since August 2013.

40 JUQUEEN user groups, May 2013 (chart of user groups across Germany)

41 Tools are ported: LinkTest, Scalasca (application profiles and traces), SIONlib. (Figure: trace analysis by Scalasca on Blue Gene/Q.)

42 SIONlib, a scalable I/O library for parallel access to task-local files. It supports writing and reading binary data to or from thousands of processors into a single physical file, or a small number of them. In the general structure of a SION file, the starting positions of the task blocks are aligned to the file-system block size. All writing and reading is done asynchronously. SIONlib allows parallel access using MPI, OpenMP, or their combination, and sequential access for post-processing utilities.
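A minimal sketch of SIONlib's MPI interface, assuming one physical file and a chunk size of 1 MiB per task (file name and sizes are illustrative; consult the SIONlib documentation for the exact API of the installed version):

```c
#include <mpi.h>
#include <stdio.h>
#include "sion.h"

int main(int argc, char **argv)
{
    int rank, sid, nfiles = 1;            /* one physical file for all tasks */
    sion_int64 chunksize = 1024 * 1024;   /* per-task chunk (illustrative)   */
    sion_int32 fsblksize = -1;            /* -1: use the file-system default */
    MPI_Comm lcomm;
    FILE *fp;
    char *newname = NULL;
    double data[1024] = { 0.0 };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* collective open: every task gets its own logical block in one file */
    sid = sion_paropen_mpi("run.sion", "bw", &nfiles, MPI_COMM_WORLD,
                           &lcomm, &chunksize, &fsblksize,
                           &rank, &fp, &newname);

    /* task-local write; no coordination with other tasks needed */
    sion_fwrite(data, sizeof(double), 1024, sid);

    sion_parclose_mpi(sid);               /* collective close */
    MPI_Finalize();
    return 0;
}
```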

43 High-Q Club: the highest-scaling codes on JUQUEEN. Goals: promote the idea of exascale capability computing; showcase codes that can utilise the entire 28 racks, showing that they are capable of using all 458,752 cores; invest in tuning and scaling the codes; aim at more than 1 million concurrent threads; use a variety of programming languages and parallelisation models, demonstrating individual approaches to reach that goal. An important milestone in application development towards future HPC (exascale) systems.
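The target of more than a million concurrent threads follows directly from the hardware: every A2 core runs 4 hardware threads, so a full-machine job can use

$$
458{,}752\ \text{cores} \times 4\ \tfrac{\text{threads}}{\text{core}} = 1{,}835{,}008\ \text{hardware threads.}
$$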

44 High-Q Club members: dynQCD, lattice quantum chromodynamics with dynamical fermions (JSC); Gysela, a gyrokinetic code for modelling fusion core plasmas (CEA, France); PEPC, a particle tree code for solving the N-body problem for Coulomb, gravitational and hydrodynamic systems (JSC); PMG+PFASST, a space-time parallel multilevel solver (University of Wuppertal / LBNL); Terra-Neo, a multigrid solver for geophysics applications (University of Erlangen); waLBerla, a widely applicable lattice Boltzmann solver (University of Erlangen).

45 JUQUEEN Porting and Tuning Workshop: first workshop in February 2013, second workshop in February 2014.

46 Disks, Tapes, Robots

47 Data growth at JSC (chart: GPFS user data in terabytes over the years, up to about 13,000 TB, shown separately for disk and tape).

48 Jülich Storage Server (JUST), about 10 PB online (diagram): file systems $WORK, $HOME, $ARCH (7.1 PB) and $DATA, built from storage blocks of 3 x 350 TB, 2 x 700 TB and 1 x 350 TB; aggregate bandwidth 160 GB/s; management and TSM servers; clients are JUVIS, JUDGE, JUQUEEN and JUROPA.

49 Tape storage (diagram): tapes totalling 44.5 PB; 3,000 TSM clients on campus; IBM TSM servers and an ACSLS server; JUQUEEN, JUROPA and JUDGE attached via SAN. Two Oracle StorageTek SL libraries with T10K A/B/C drives, housed in separate buildings and connected by a private tape network, serve data migration, archive and backup. Cartridge capacity: up to 5 TB; transfer rate: up to 240 MB/s.

50 PRACE, the Partnership for Advanced Computing in Europe, consists of 25 European partner states, each represented by one institution.

51 Goals: prepare the creation of a persistent, sustainable pan-European HPC service; prepare the establishment of Tier-0 supercomputing centres at different European sites; define and establish a legal and organisational structure involving HPC centres, national funding agencies, and scientific user communities; develop funding and usage models and establish a peer-review process; provide training for European scientists and create a permanent education programme.

52 Tier-0 systems today: Curie, Bull bullx cluster (France); Fermi, IBM Blue Gene/Q (Italy); Hermit, Cray XE6 (Germany); JUQUEEN, IBM Blue Gene/Q (Germany); MareNostrum, IBM System x iDataPlex (Spain); SuperMUC, IBM System x iDataPlex (Germany).

53 JUQUEEN: granted PRACE projects, May 2013 (overview chart)

54 PRACE calls for proposals for Project Access are issued twice per year: a call opening in February provides access starting in September of the same year; a call opening in September provides access starting in March of the next year. The PRACE 8th Call for Project Access is now open until 15 October.

55 QUESTIONS?
