ASKAP Central Processor: Design and Implementation


1 ASKAP Central Processor: Design and Implementation. Calibration and Imaging Workshop 2014. Ben Humphreys, ASKAP Software and Computing Project Engineer. 3rd - 7th March 2014. ASTRONOMY AND SPACE SCIENCE

2 Australian SKA Pathfinder (ASKAP). Sited at the Murchison Radio Observatory, Western Australia. Observes between 0.7 and 1.8 GHz. 36 antennas, 12 m diameter. Started construction July 2006. Data rate from correlator ~2.5 GB/s, a DVD every two seconds! Science processing requirement: 200 TFlop/s for basic capabilities, 800+ TFlop/s for high angular resolution spectral line imaging.

3 [Diagram: correlator data path. Correlator cards 1-96 (28 Gbit/s) connect via Ethernet switches to four correlator control computers, which feed ~20 Gbit/s into a DWDM terminal to Perth.]

4 The Pawsey High Performance Computing Centre for SKA Science. AUD$80M supercomputing centre. Supports storage and processing of data from the Australian SKA Pathfinder and the Murchison Widefield Array. Construction completed April 2013.

5 ASKAP Central Processor. 472 x Cray XC30 compute nodes, 200 TFlop/s peak, Cray Aries interconnect (Dragonfly topology). Cray Sonexion Lustre storage: 1.4 PB usable, 480 x 4 TB disk drives, RAID 6 + hot spares, approximately 30 GByte/s I/O performance.

6 Cray XC30 Compute Nodes. 472 x Cray XC30 compute nodes, each with 2 x 3.0 GHz Intel Xeon E5 v2 (Ivy Bridge) CPUs, 10 cores per CPU (20 per node), and 64 GB DDR3-1866 MHz RAM. Image credit: Cray.

7 ASKAP Central Processor. 16 x ingest nodes: 2 x 2.0 GHz Intel Xeon E5 (Sandy Bridge) CPUs + 64 GB RAM, 10 GbE connectivity to the MRO, 4x FDR InfiniBand connectivity to the compute nodes and Lustre filesystem. 2 x login nodes. 2 x data mover nodes (dedicated to external data transfers). 1.4 PB (usable) Cray Sonexion Lustre storage.

8 I/O & Network Hall

9 Tape Hall

10 [Diagram: data path from the MRO. The DWDM terminal feeds an Ethernet switch, which connects to 16 ingest nodes at 2 x 10 GbE per node. The ingest nodes connect through an FDR InfiniBand switch (2 x 56 Gbit/s IB per node) to the Cray Sonexion storage (2 x MMUs, 6 x SSUs) and, via four router nodes, to the Cray XC30 (472 compute nodes).]

11 Ingest Pipeline. Inputs: telescope metadata from the Telescope Observation Manager and visibilities from the correlator. Stages: merge metadata & visibilities; flag (from RFI database); flag (on-the-fly detection); apply calibration; channel averaging (16416 to 304); channel averaging (304 to ~30); outputs at each resolution feed the downstream pipelines. Supporting services: the RFI Source Service (database of known RFI sources) and the Calibration Data Service (latest calibration solution).
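The coarse products are formed by averaging blocks of adjacent fine channels (54 fine channels per coarse channel for 16416 to 304). A minimal numpy sketch of that step, assuming visibilities arrive as a (baselines, channels, polarisations) complex array; the shapes and function name are illustrative rather than the ASKAPsoft implementation, and a real averager would also honour the flags produced upstream.

```python
import numpy as np

def average_channels(vis, factor):
    """Average fine spectral channels into coarse channels.

    vis: complex visibilities, shape (n_baselines, n_channels, n_pols).
    factor: fine channels per coarse channel (e.g. 54 for 16416 -> 304).
    """
    n_bl, n_chan, n_pol = vis.shape
    assert n_chan % factor == 0, "channel count must divide evenly"
    # Group the fine channels belonging to each coarse channel on their own
    # axis, then average over that axis.
    return vis.reshape(n_bl, n_chan // factor, factor, n_pol).mean(axis=2)

# Example: one integration, 630 baselines (36 antennas), 16416 fine channels.
vis = np.ones((630, 16416, 4), dtype=np.complex64)   # illustrative sizes
coarse = average_channels(vis, 54)
print(coarse.shape)   # (630, 304, 4)
```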

12 Calibration and Imaging Pipelines. Calibration pipeline: 18.5 kHz channels -> ccalibrator -> calibration solution -> Calibration Data Service. Large-N (e.g. spectral line) imager pipeline: UV data in 18.5 kHz channels -> imager (cimager) -> image cube -> source finder/identifier -> source catalog. Small-N (e.g. continuum) imager pipeline: UV data in 304 channels (1 MHz), using the Sky Model Service -> imager (cimager) -> images -> source finder/identifier -> source catalog. Transient detector pipeline: ~30 channels (10 MHz) -> transient imager (cfimager) -> images -> transient finder/identifier -> transient detections -> Light Curve Service. All pipelines are fed by the ingest pipeline.
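Because each 18.5 kHz channel can be imaged independently, the spectral line pipeline maps naturally onto one MPI rank (or group of ranks) per channel. A minimal mpi4py sketch of that partitioning, with a hypothetical image_channel placeholder; this illustrates the work distribution only, not the cimager implementation.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_CHANNELS = 16416          # fine channels at 18.5 kHz (illustrative)

def image_channel(chan):
    """Hypothetical placeholder for gridding, FFT and deconvolution of one channel."""
    pass

# Round-robin distribution: rank r images channels r, r + size, r + 2*size, ...
for chan in range(rank, N_CHANNELS, size):
    image_channel(chan)

comm.Barrier()
if rank == 0:
    print("all channels imaged; planes can now be assembled into the image cube")
```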

13 Data Services. Sky Model Service: provides access to the Global Sky Model (GSM), an all-sky database with flux measurements in a frequency range appropriate to ASKAP. RFI Source Service: responsible for managing and providing access to a database of known RFI sources that may impact ASKAP observations. Calibration Data Service: provides an interface to a database containing calibration parameters.
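The ingest pipeline's "obtain latest calibration solution" step reduces to a keyed, time-ordered lookup against the calibration parameter store. A toy in-memory sketch of that interaction; the class, method names and gain schema are assumptions for illustration and do not represent the actual ASKAP service interface, which is a separate networked service.

```python
import time

class CalibrationDataService:
    """Toy stand-in for a service holding time-tagged antenna gain solutions."""

    def __init__(self):
        self._solutions = []             # list of (timestamp, {(antenna, beam): complex gain})

    def add_solution(self, gains, timestamp=None):
        self._solutions.append((timestamp or time.time(), gains))

    def latest_solution(self):
        # The ingest pipeline only ever needs the most recent solution.
        return max(self._solutions, key=lambda s: s[0])[1] if self._solutions else None

# Ingest-side usage: fetch the newest gains and apply them to one visibility.
svc = CalibrationDataService()
svc.add_solution({(0, 0): 1.02 + 0.01j, (1, 0): 0.98 - 0.02j})
gains = svc.latest_solution()
vis = 5.0 + 2.0j                         # raw visibility on baseline (ant 0, ant 1), beam 0
calibrated = vis / (gains[(0, 0)] * gains[(1, 0)].conjugate())
print(calibrated)
```

The division by the product of one gain and the other's conjugate is the usual per-baseline gain correction for antennas i and j.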

14 Challenges and Lessons Learned

15 Challenges and Lessons Learned: per-process memory footprint; disk I/O; fault tolerance & error handling.

16 Thank you. CSIRO Astronomy and Space Science. Ben Humphreys, ASKAP Computing Project Engineer. e: ben.humphreys@csiro.au. ASTRONOMY AND SPACE SCIENCE

17 Per-process memory footprint. Approximately 7 GB per spectral line imaging process; approximately 16 GB per continuum imaging process (40 GB with Taylor terms 0, 1, 2). Our target was 4 GB per core; the actual system provides 3.2 GB per core! We currently run MPI everywhere with no multi-threading (1 core == 1 process). Lots of shared-memory parallelism could be exploited: gridding/degridding, generation of convolution functions, FFT.
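Moving from one MPI process per core to one process per node with threads would let all 20 cores share a single copy of the large gridding buffers and convolution functions instead of holding 20 private copies. A minimal hybrid sketch of that pattern, assuming mpi4py plus a Python thread pool over numpy FFTs; the real candidates listed above live in C++ inside ASKAPsoft, so this only illustrates the idea.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
THREADS_PER_RANK = 20                    # one MPI rank per node, one thread per core

# Large read-only support data (e.g. convolution functions) held once per node
# and shared by all threads, instead of once per core under the pure-MPI model.
conv_functions = np.random.rand(128, 128, 33).astype(np.complex64)

def image_plane(plane):
    # Placeholder for gridding (which would read the shared conv_functions)
    # followed by an FFT; numpy releases the GIL inside fft2, so threads overlap.
    return np.fft.fft2(plane)

planes = [np.random.rand(2048, 2048).astype(np.complex64) for _ in range(THREADS_PER_RANK)]
with ThreadPoolExecutor(max_workers=THREADS_PER_RANK) as pool:
    images = list(pool.map(image_plane, planes))

if comm.Get_rank() == 0:
    print(f"rank 0 transformed {len(images)} planes with {THREADS_PER_RANK} shared-memory threads")
```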

18 Disk I/O. Many inhibitors to stable performance: load-balancing (OSS selection), RAID check (Sunday night slow-down). The Lustre filesystem is capable of achieving 30 GB/s write, yet we typically see 2 GB/s writing measurement sets.
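A quick way to quantify the gap between the filesystem's headline rate and what a single writer actually achieves is to time large sequential writes. A minimal sketch; the path and transfer sizes are assumptions, and on Lustre the stripe settings of the target directory (and the OSS load at the time) strongly affect the result.

```python
import os
import time

PATH = "/scratch/askap/io_probe.dat"     # assumed Lustre scratch path
BLOCK = 64 * 1024 * 1024                 # 64 MiB per write
N_BLOCKS = 64                            # 4 GiB total

buf = os.urandom(BLOCK)
start = time.perf_counter()
with open(PATH, "wb", buffering=0) as f:
    for _ in range(N_BLOCKS):
        f.write(buf)
    os.fsync(f.fileno())                 # make sure data actually reaches the OSTs
elapsed = time.perf_counter() - start

gib = N_BLOCKS * BLOCK / 2**30
print(f"wrote {gib:.1f} GiB at {gib / elapsed:.2f} GiB/s")
os.remove(PATH)
```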

19 Fault Tolerance. It's hard to retrofit fault tolerance! It is possible to get checkpoint/restart easily (e.g. BLCR), but the time to checkpoint 30 TB of RAM at 15 GB/s is ~30 mins. Better models: checkpoint at the end of each minor cycle, and only checkpoint metadata (e.g. which spectral channels have been written to disk and which are yet to be processed).
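Checkpointing only the bookkeeping costs kilobytes rather than 30 TB: at the end of each minor cycle, record which spectral channels are safely on disk so a restarted job can resume from the remainder. A minimal sketch of that idea; the file name and layout are assumptions for illustration.

```python
import json
import os

CHECKPOINT = "imager_checkpoint.json"    # assumed per-job checkpoint file

def save_checkpoint(written_channels, total_channels):
    """Record which spectral channels are safely on disk (end of minor cycle)."""
    tmp = CHECKPOINT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"total": total_channels, "written": sorted(written_channels)}, f)
    os.replace(tmp, CHECKPOINT)          # atomic rename: a crash never corrupts the checkpoint

def remaining_channels():
    """On restart, work out which channels still need to be processed."""
    if not os.path.exists(CHECKPOINT):
        return None                      # no checkpoint: start from scratch
    with open(CHECKPOINT) as f:
        state = json.load(f)
    return sorted(set(range(state["total"])) - set(state["written"]))

# End of a minor cycle: channels 0-99 done out of 16416.
save_checkpoint(set(range(100)), 16416)
print(len(remaining_channels()), "channels left to image")   # 16316
```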
