Overview of Science Data Processor
Paul Alexander, Rosie Bolton, Ian Cooper, Bojan Nikolic, Simon Ratcliffe, Harry Smith
2 Consortium Management and System Engineering
3 Aims of the SDP Work
- Detailed design of the complete SDP system to CDR: the hardware platform and the complete software stack, including system and management layers
- Delivery is the design documentation, supported by the prototyping required for verification of the design
- Requirements-driven approach to systems engineering (SE)
4 Management Structure
5 Management Team
- Lead: Paul Alexander
- PM: Harry Smith (acting)
- Deputy PM: Ian Cooper
- PE: Bojan Nikolic
- SE: Simon Ratcliffe
- PS: Rosie Bolton
6 Consortium Members and Management Groupings (partner: status, workshare %)
- University of Cambridge (Astrophysics & High Performance Computing Groups) (UK): Full, 8.3
- Netherlands Institute for Radio Astronomy: Full, 8.6
- International Centre for Radio Astronomy Research (AUS): Full, 7.5
- SKA South Africa: Full, 7.2
- STFC Laboratories (UK): Full, 3.1
- NIP Team (University of Manchester, Max-Planck-Institut für Radioastronomie, University of Oxford (Physics)): Full, 6.3
- University of Oxford (OeRC) (UK): Full, 3.9
- Chinese Universities Collaboration: Full, 22.0
- New Zealand Universities Collaboration: Full, 2.7
- Canadian Collaboration (Canadian Universities Collaboration, CADC, CANARIE): Full, 12.2
- Forschungszentrum Jülich (Germany): Full/HPCF, 2.1
- Centre for High Performance Computing (SA): HPCF/Full, 3.6
- iVEC (AUS): HPCF, 1.0
- Centro Nacional de Supercomputación (ESP): HPCF, 1.4
- Fundación Centro de Supercomputación de Castilla y León (ESP): HPCF, 1.0
- Instituto de Telecomunicações (Portugal): Associate, 3.1
- University of Southampton (UK) *: Associate, 1.5
- University College London (UK): Associate, 1.5
- University of Melbourne (AUS): Associate, 1.0
- French Universities Collaboration: Associate, 1.0
- Universidad de Chile: Associate, 1.0
7 Consortium Industry Partners
Organizations: Amazon, AMD, AVANTEK, ARM, Bull, CISCO, DDN, Dell, Geomerics, GNODAL, HP, Intel, Mellanox, NAG, NVIDIA, Oracle, Parallel Scientific, SGI, Thoughtworks, Tilera, Xyratex
Areas of expertise across these partners: SKA-related verification projects; energy-efficient (EE) accelerator options (GPGPU, MIC); EE accelerator software stacks; ARM-based architectures and software stacks for EE processing cores; storage system architecture & streaming data handling; file & object store architectures; interconnects and NICs; EE computing architecture and Big Data tools; software models
8 Original Proposed Schedule
- Motivation was the expectation of major system changes following re-baselining
- Follow the standard SE approach of element PDRs following the system PDR
- Baseline element architecture at PDR
9 Agreed Delivery Milestones: Stage 1 (code: description, date; Value (M€) figures not transcribed)
- M1: Kick-off of the project (Nov)
- M2: Confirmation of requirements (Feb)
- M3: Reconciliation of requirements with initial functional analysis (March)
- M4: Analysis of the impact of SDP requirements on performance-critical algorithms, including baseline-dependent averaging (May)
- M5: Analysis of scaling of algorithms and components to the computational scale of the SKA (June)
- M6: Preliminary element architecture (July)
- M7: Key parameters for inputs into the System PDR, to be defined (Sep)
- M8: Participation in system-level review (Dec)
- M9: Closure of Stage 1 (Dec)
10 Agreed Delivery Milestones: Stage 2 (code: description, date; Value (M€) figures not transcribed)
- M10: Kick-off of Stage 2 (Jan)
- M11: Element PDR (Feb)
- M12: Test report: component interface completeness (Jun)
- M13: Test report: key performance metrics of hardware in the Open Architecture Lab (Sep)
- M14: Test report: functional completeness (Dec)
- M15: Test report: scaling of system software prototype on ISP HPC facilities (March 2016)
- M16: Test report: performance vertical prototypes (Jun)
- M17: Test report: test of system software prototype with revised components from vertical prototyping 1 (Aug)
- M18: Element CDR submission (Sep)
- M19: CDR close-out and wind-up of work (Nov)
11 Work Breakdown and Scope
12 Scope
- Added a WBS element to consider technical aspects of data delivery to users
- Designed to inform the final decision on the edge of the observatory; does not imply adding additional responsibilities to the observatory
- Outside of the costing envelope
- Assume ingest of visibilities, pulsar candidates after search, and other time-series data from CSP/SADT (TBC)
13 Design Approach
- Adopt an incremental and iterative design approach to the system engineering, fully exploited in the prototyping work
- Distinguish between horizontal and vertical prototyping:
- Horizontal prototyping aims to provide a system-wide prototype
- Vertical prototyping provides detailed prototyping for performance and functionality of individual components
14 Design Approach
Horizontal prototype of the whole software system:
- Used to test and verify the system decomposition and the specification of internal interfaces which emerge from the architectural analysis
- Test models for scalability to SKA1 and give consideration to scaling of the software system to SKA2
- Prototype the system architecture to test for the required flexibility in architectural design
- Prototype and test the design models for loose versus tight coupling between software and hardware
Vertical prototyping of key components:
- Close interaction with industry on emerging technologies
- Aim to roadmap into technology available for SKA1
15 Design Approach: Open Architecture Lab
Approach based on the Lawrence Livermore Hyperion initiative:
- Integrated work with industry partners
- Emphasis on determining an appropriate scalable element
- Emphasis on system-level components of the software stack
Create an evaluation and prototyping testbed for new hardware and software technologies to address:
- Petascale I/O technology scaling for SKA1 and future capacity to SKA2
- Processor, memory, networking, storage, visualization, etc.
- Designed for future technology refresh, expansion, and upgrades
- Open source software stacks
16 Element Concept
17 Principles Behind Technical Solution
- Data parallelism provides a scalable model through SKA1 to SKA2; the emphasis is on the framework to manage the throughput
- The hardware platform will be replaced on a short duty cycle, cf. any HPC facility
- The approach to data analysis for the SKA will evolve during operations as more is learnt about the system
- Require a framework in which observatory staff can develop efficient radio-astronomy-specific code
- Integration with TM distinguishes the system from a normal HPC environment
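The data-parallel principle above can be sketched in a few lines: visibility data partitioned by frequency channel, with each chunk processed independently, so the same decomposition scales as workers are added. This is an illustrative sketch only (not SDP code); the channel-averaging task and all names are placeholders, and a real deployment would distribute chunks across nodes rather than threads.

```python
# Illustrative sketch of data parallelism over frequency channels.
# process_channel is a placeholder task; threads stand in for distributed workers.
from concurrent.futures import ThreadPoolExecutor

def process_channel(chunk):
    """Placeholder per-channel task (e.g. calibrate and grid one channel)."""
    channel_id, visibilities = chunk
    return channel_id, sum(visibilities) / len(visibilities)

def run(visibility_set, workers=4):
    # One chunk per frequency channel; no chunk depends on any other.
    chunks = list(visibility_set.items())
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(process_channel, chunks))

print(len(run({ch: [float(ch + i) for i in range(8)] for ch in range(16)})))  # 16
```

Because the chunks are independent, the same structure applies whether the workers are threads, processes, or racks, which is why the framework rather than the hardware carries the scaling burden.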
18 Example Data Rates and Data Products (Baseline Design)
Aperture Array line experiment (e.g. EoR): 5 sq. degrees, channels over 250 MHz bandwidth
- ~30 GB/s, reducing quickly to ~1 GB/s
- Up to 500 TB of UV (Fourier) data; images (3D) ~1.5 TB
Imaging experiment with long baselines: 50 km baseline with the low-frequency AA or SKA1_Survey
- 1.5 TB/s, reducing to ~50 GB/s
- Up to 1000 TB/day to archive if we archive raw UV data; images (3D) ~27 TB
[Table of previous vs. Baseline Design ingest data rates (GB/s) for the LFAA, Survey and Mid use cases: values not transcribed]
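As a unit sanity check on these rates, a small conversion helper (a sketch; the only inputs are the standard conversion factors):

```python
def gbps_to_tb_per_day(gb_per_s):
    """Convert a sustained data rate in GB/s to TB/day."""
    return gb_per_s * 86400 / 1000  # 86400 s/day; 1000 GB per TB

print(gbps_to_tb_per_day(1))   # 86.4
print(gbps_to_tb_per_day(50))  # 4320.0
```

At a sustained 50 GB/s, a full day of raw UV data would be roughly 4.3 PB, well above the quoted "up to 1000 TB/day" archive figure, consistent with only part of each day's output being archived.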
19 Overall Architecture
Imaging pipeline: corner turning, coarse delays, ingest and flagging, visibility steering, observation buffer, gridding visibilities, imaging, image storage
Non-imaging pipeline: corner turning, coarse delays, ingest and flagging, beam steering, observation buffer, time-series searching, search analysis, object/timing storage
Data path: incoming data from correlator/beamformer -> switch -> buffer store -> ingest processor -> switch -> buffer store -> UV processor -> HPC -> bulk store -> regional data centres
Heterogeneous hardware architecture; homogeneous software stack
20 Imaging Processing Model
- Correlator delivers visibilities; UV processors perform RFI excision and phase rotation
- Subtract the current sky model from visibilities using the current calibration model
- Major cycle:
- Grid UV data from the UV data store (e.g. W-projection)
- Image the gridded data
- Deconvolve imaged data (minor cycle)
- Update the current sky model
- Solve for the telescope and image-plane calibration model and update the calibration model (imaging processors)
- Output: astronomical quality data
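The control flow of the major/minor cycle above can be sketched as a loop. This is a minimal sketch of the iteration structure only; every operation (subtraction, gridding, deconvolution, calibration) is reduced to a scalar placeholder, not a real implementation.

```python
# Minimal sketch of the major/minor-cycle control flow.
# All operations are illustrative stand-ins, not SDP algorithms.

def subtract_model(vis, sky, cal):
    return [v - cal * sky for v in vis]          # subtract current sky model

def grid_and_image(residual_vis):
    return sum(residual_vis) / len(residual_vis) # stand-in for grid + FFT

def deconvolve(dirty, gain=0.5):
    return gain * dirty                          # stand-in minor cycle (CLEAN-like)

def solve_calibration(vis, sky):
    return 1.0                                   # stand-in calibration solver

def major_cycle(vis, n_major=3):
    sky, cal = 0.0, 1.0
    for _ in range(n_major):
        residual = subtract_model(vis, sky, cal)
        dirty = grid_and_image(residual)
        sky += deconvolve(dirty)                 # minor cycle updates sky model
        cal = solve_calibration(vis, sky)        # update calibration model
    return sky

print(major_cycle([1.0, 1.0, 1.0]))  # 0.875
```

The point of the structure is that each major cycle revisits the full visibility data with an improved sky and calibration model, which is what drives the repeated reads of the UV buffer in the data-flow slides.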
21 Use Case Requirements and Estimates (preliminary), SKA1 LOW
Use cases: CH2 EoR HI emission; CH2 EoR source subtraction (continuum survey); CH3 HI absorption; CH4 high-redshift HI absorption
Tabulated per use case (several values lost in transcription):
- Bmax (km), G(out) (GB/s), Nchan (up to 2500; varies with baseline), GSamples/s: [values not fully transcribed]
- Gridding ops/sample: 6.3e3, 63e3, 13e3, 3.2e4 respectively
- PFlop/s (gridding), UV buffer (TB), observation length (hours), archive for 1000 hrs of experiment (PB): [values not transcribed]
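The gridding compute load follows directly from the sample rate and the operations per sample. A hedged arithmetic sketch (the 1000 GSamples/s input is an assumed example, not a figure from the table):

```python
def gridding_pflops(gsamples_per_s, ops_per_sample):
    # PFlop/s = (samples/s) * (ops/sample) / 1e15; 1 GSample = 1e9 samples
    return gsamples_per_s * 1e9 * ops_per_sample / 1e15

# Assumed 1000 GSamples/s at the 3.2e4 gridding ops/sample from the table:
print(gridding_pflops(1000, 3.2e4))  # 32.0
```

That combination lands at 32 PFlop/s, the same order as the maximum SURVEY computing load quoted on slide 22.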
22 System Sizing
SKA1 LOW / SURVEY (36 beams):
- Data rate out of correlator: 4670 GB/s (SURVEY), 842 GB/s (LOW)
- Max data rate into SDP: 995 GB/s (SURVEY: DRM Ch 3 HI absorption; proportional to Nbeams, assuming 36)
- Max computing load: 32 PFlop/s (SURVEY: DRM Ch 3 HI absorption; proportional to Nbeams, assuming 36)
- Max UV buffer: 14 PB (SURVEY: DRM Ch 3 HI absorption)
SKA1 MID:
- Data rate out of correlator: 1800 GB/s (Baseline Design, page 49)
- Max data rate into SDP: 255 GB/s (DRM Ch 3: HI absorption, band 1)
- Max computing load: 10.0 PFlop/s (DRM Ch 3: HI absorption, band 1)
- Max UV buffer: 11.0 PB (DRM Ch 3: HI absorption, band 1)
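The UV buffer size is roughly ingest rate times observation length, so the quoted figures imply how long an observation the buffer can hold. A back-of-envelope sketch using only the numbers above:

```python
def buffer_hours(buffer_pb, ingest_gb_per_s):
    """Hours of sustained ingest a UV buffer of the given size can hold."""
    return buffer_pb * 1e6 / ingest_gb_per_s / 3600  # 1 PB = 1e6 GB

print(round(buffer_hours(14, 995), 1))    # SURVEY: ~3.9 hours
print(round(buffer_hours(11.0, 255), 1))  # MID: ~12.0 hours
```

So the 14 PB SURVEY buffer corresponds to roughly four hours of observation at the maximum ingest rate, consistent with the observation lengths considered in the use-case estimates.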
23 Data Flow
- Telescope Manager exchanges metadata with the Science Data Processor (Local M&C, Master Controller, Local M&C Database)
- Correlator / beamformer output passes through data routing and ingest into the data buffer
- Visibility processing and image-plane processing perform multiple reads of the buffer, using sky models and calibration parameters
- Time-series search and time-series processing likewise perform multiple reads
- Data products feed the tiered data delivery system
24 Data Flow: Tiered Data Delivery
- SDP core facilities in South Africa and Australia route data to regional centres
- Each regional centre holds a sub-set of the archive
- Astronomers access the archive via the cloud
25 Example Implementation (current technology)
42U rack: 20 processing blades and two 56 Gb/s leaf switches, with 56 Gb/s uplinks to the rack switches
Processing blade specification:
- Capable host processor (multi-core x86, e.g. dual Xeon) with significant RAM
- Two many-core accelerators (>10 TFlop/s each; GPGPU, MIC, ...) on the PCI bus, programmable
- 4 x 1 TB disks
- Per blade: ~20 TFlop/s, 2 x 56 Gb/s comms, 4 TB storage, <1 kW power
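Scaling this illustrative blade up to racks gives a feel for the facility size. A hedged sketch using only the per-blade figures on the slide (peak numbers; achieved-vs-peak efficiency would increase the rack count substantially):

```python
# Back-of-envelope rack aggregates from the slide's per-blade figures.
BLADES_PER_RACK = 20
TFLOPS_PER_BLADE = 20   # peak, host + accelerators
TB_PER_BLADE = 4
KW_PER_BLADE = 1        # upper bound ("<1 kW")

rack_tflops = BLADES_PER_RACK * TFLOPS_PER_BLADE  # 400 TFlop/s per rack
rack_tb = BLADES_PER_RACK * TB_PER_BLADE          # 80 TB per rack
rack_kw = BLADES_PER_RACK * KW_PER_BLADE          # <20 kW per rack

# Racks needed to reach the 32 PFlop/s SURVEY load from slide 22 (peak):
racks_for_32_pflops = 32 * 1000 / rack_tflops
print(rack_tflops, rack_tb, rack_kw, racks_for_32_pflops)  # 400 80 20 80.0
```

At peak rates, the 32 PFlop/s SURVEY load maps to about 80 such racks, which is why the short hardware duty cycle and technology refresh emphasised earlier matter so much to the design.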
26 Functional Analysis
27 Software Stack
- SKA subsystems and service components
- High-level APIs and tools; UIF toolkit
- SKA Common Software: application framework; core services (access control, monitoring, archiver, live data access, logging system, alarm service, configuration management, scheduling block service); base tools (communication middleware, database support)
- Third-party tools and libraries; development tools
- Operating system
More informationThe Mont-Blanc approach towards Exascale
http://www.montblanc-project.eu The Mont-Blanc approach towards Exascale Alex Ramirez Barcelona Supercomputing Center Disclaimer: Not only I speak for myself... All references to unavailable products are
More informationIntel Many Integrated Core (MIC) Matt Kelly & Ryan Rawlins
Intel Many Integrated Core (MIC) Matt Kelly & Ryan Rawlins Outline History & Motivation Architecture Core architecture Network Topology Memory hierarchy Brief comparison to GPU & Tilera Programming Applications
More informationEvolution of the ATLAS PanDA Workload Management System for Exascale Computational Science
Evolution of the ATLAS PanDA Workload Management System for Exascale Computational Science T. Maeno, K. De, A. Klimentov, P. Nilsson, D. Oleynik, S. Panitkin, A. Petrosyan, J. Schovancova, A. Vaniachine,
More informationUsers and utilization of CERIT-SC infrastructure
Users and utilization of CERIT-SC infrastructure Equipment CERIT-SC is an integral part of the national e-infrastructure operated by CESNET, and it leverages many of its services (e.g. management of user
More informationThe Computation and Data Needs of Canadian Astronomy
Summary The Computation and Data Needs of Canadian Astronomy The Computation and Data Committee In this white paper, we review the role of computing in astronomy and astrophysics and present the Computation
More informationCSIRO ASKAP Science Data Archive CSIRO ASTRONOMY AND SPACE SCIENCE (CASS)
CSIRO ASKAP Science Data Archive CSIRO ASTRONOMY AND SPACE SCIENCE (CASS) Jessica Chapman, ATUC Meeting, 5 December 2013 CSIRO ASKAP Science Data Archive (CASDA) Talk outline A: CASDA overview and timeline
More informationProtocol and measurement support for high-speed applications: Ongoing work in the projects 46PaQ, MASTS & ESLEA
Protocol and measurement support for high-speed applications: Ongoing work in the projects 46PaQ, MASTS & ESLEA Saleem Bhatti Networked and Distributed Systems (NDS) Research Group Computer Science, University
More informationA GPU based brute force de-dispersion algorithm for LOFAR
A GPU based brute force de-dispersion algorithm for LOFAR W. Armour, M. Giles, A. Karastergiou and C. Williams. University of Oxford. 8 th May 2012 1 GPUs Why use GPUs? Latest Kepler/Fermi based cards
More informationSCHEDULE AND TIMELINE. The Project WBS, expanded to level 2, presented in the form of a Gantt chart
ALMA Test Interferometer Project Book, Chapter 14 SCHEDULE AND TIMELINE Richard Simon Last Changed 2000-Feb-22 Revision History: 2000-Feb-22: Initial version created (R. Simon) Introduction This chapter
More informationArchitectures for Scalable Media Object Search
Architectures for Scalable Media Object Search Dennis Sng Deputy Director & Principal Scientist NVIDIA GPU Technology Workshop 10 July 2014 ROSE LAB OVERVIEW 2 Large Database of Media Objects Next- Generation
More informationEVLA Software High-Level Design Presentation Notes
Expanded Very Large Array EVLA-SW-??? Revision:1.0 2004-Feb-25 Presentation Notes B.Waters EVLA Software High-Level Design Presentation Notes EVLA Software Design Group: T. Morgan, K. Ryan, K. Sowinski,
More informationASKAP Pipeline processing and simulations. Dr Matthew Whiting ASKAP Computing, CSIRO May 5th, 2010
ASKAP Pipeline processing and simulations Dr Matthew Whiting ASKAP Computing, CSIRO May 5th, 2010 ASKAP Computing Team Members Team members Marsfield: Tim Cornwell, Ben Humphreys, Juan Carlos Guzman, Malte
More informationSignal processing with heterogeneous digital filterbanks: lessons from the MWA and EDA
Signal processing with heterogeneous digital filterbanks: lessons from the MWA and EDA Randall Wayth ICRAR/Curtin University with Marcin Sokolowski, Cathryn Trott Outline "Holy grail of CASPER system is
More informationAim High. Intel Technical Update Teratec 07 Symposium. June 20, Stephen R. Wheat, Ph.D. Director, HPC Digital Enterprise Group
Aim High Intel Technical Update Teratec 07 Symposium June 20, 2007 Stephen R. Wheat, Ph.D. Director, HPC Digital Enterprise Group Risk Factors Today s s presentations contain forward-looking statements.
More informationMilestone Solution Partner IT Infrastructure Components Certification Report
Milestone Solution Partner IT Infrastructure Components Certification Report Dell Storage PS6610, Dell EqualLogic PS6210, Dell EqualLogic FS7610 July 2015 Revisions Date July 2015 Description Initial release
More informationThe Cambridge Bio-Medical-Cloud An OpenStack platform for medical analytics and biomedical research
The Cambridge Bio-Medical-Cloud An OpenStack platform for medical analytics and biomedical research Dr Paul Calleja Director of Research Computing University of Cambridge Global leader in science & technology
More informationHPC Capabilities at Research Intensive Universities
HPC Capabilities at Research Intensive Universities Purushotham (Puri) V. Bangalore Department of Computer and Information Sciences and UAB IT Research Computing UAB HPC Resources 24 nodes (192 cores)
More informationTightly Coupled Accelerators Architecture
Tightly Coupled Accelerators Architecture Yuetsu Kodama Division of High Performance Computing Systems Center for Computational Sciences University of Tsukuba, Japan 1 What is Tightly Coupled Accelerators
More informationPractical Scientific Computing
Practical Scientific Computing Performance-optimized Programming Preliminary discussion: July 11, 2008 Dr. Ralf-Peter Mundani, mundani@tum.de Dipl.-Ing. Ioan Lucian Muntean, muntean@in.tum.de MSc. Csaba
More informationEuropean Ground Systems - Common Core (EGS-CC) ASI Italian Information Day
European Ground Systems - Common Core (EGS-CC) ASI Italian Information Day The next generation Functional Verification Test facilities (EGSE, ATB, SVF) & Mission Control Systems (MCS) K. Hjortnaes/N. Peccia
More informationMASSACHUSETTS INSTITUTE OF TECHNOLOGY HAYSTACK OBSERVATORY
MASSACHUSETTS INSTITUTE OF TECHNOLOGY HAYSTACK OBSERVATORY WESTFORD, MASSACHUSETTS 01886-1299 LOFAR MEMO #002 September 3, 2001 Phone: (978) 692-4764 Fax : (781) 981-0590 To: From: Subject: LOFAR Group
More information