Computing and Networking at Diamond Light Source. Mark Heron, Head of Control Systems


1 Computing and Networking at Diamond Light Source. Mark Heron, Head of Control Systems

2 Harwell Science and Innovation Campus: ISIS (spallation neutron source), Central Laser Facility, LHC Tier 1 computing, Research Complex (for users of Diamond, ISIS and CLF), Diamond Light Source

3 Diamond Light Source

4 Synchrotron (SR) Science Examples: pharmaceutical manufacture & processing; casting aluminium; non-destructive imaging of fossils; structure of the histamine H1 receptor

5 Beamlines or Instruments vs operational period, as of 25/2/2016 [chart] (Software Developers Away Day 2016)

6 Data Size in PB: cumulative amount of data generated by Diamond, Jan 2007 to Jan 2016 [chart]

7 Data Rates: detector performance (MB/s) [chart]
Before 2009, no detector faster than ~10 MB/s
2009: Pilatus 6M system, 60 MB/s
Faster Pilatus 6M systems at higher frame rates: 150 MB/s, then 600 MB/s
2013: ~10 beamlines with 10 GbE detectors (mainly Pilatus and PCO Edge)
2016: Percival detector, 6 GB/s
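To put these rates in context, a minimal back-of-the-envelope sketch converting the quoted sustained rates into daily volumes; the duty cycle is an assumption, since detectors do not stream around the clock:

```python
# Rough daily data volumes for the detector rates quoted above.
# DUTY_CYCLE is an assumed fraction of the day spent collecting.

RATES_MB_S = {
    "pre-2009 detector": 10,      # ~10 MB/s upper bound quoted above
    "Pilatus 6M (2009)": 60,
    "Pilatus 6M (later)": 600,
    "Percival (2016)": 6000,      # 6 GB/s
}

DUTY_CYCLE = 0.1  # assumption: 10% of a day actually streaming

for name, mb_s in RATES_MB_S.items():
    tb_per_day = mb_s * 86400 * DUTY_CYCLE / 1e6  # MB -> TB
    print(f"{name:20s} {mb_s:5d} MB/s -> {tb_per_day:6.1f} TB/day")
```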

8 Data Rates

9

10 Electron Microscopes
Life science EMs: 2x Titan Krios electron microscopes with Gatan Quantum detector (600 MB/s)
2x physical science EMs to come
2x further life science EMs to come

11 Example of EM Processing
17,500 particles, 10 hours of data collection
Average resolution 3.2 Å, processed with Relion
Dan Clare, Sonja Welsch

12 Typical User Setup

13 GDA User Interface
Rich GUI clients: widgets, views and perspectives using the Eclipse plugin framework
Terminal, live plotting, script editor, analysis & visualisation, log view

14 Data Flow

15

16

17 High throughput MX at DLS: 13:09:01 to 13:11:18, < 2m20s [screenshots]

18

19 First Line Storage (~5 PB online storage)
430 TB, 12 GB/s, Lustre file system
880 TB, 16 GB/s, GPFS file system
3.6 PB, 30 GB/s, GPFS file system using GPFS clusters; commissioned at the start of …
… TB, 1 GB/s, general-purpose NAS file system for slower beamlines
Lustre vs GPFS:
Lustre is good for high aggregate cluster processing rates
GPFS is better for single-point data rates
With GPFS we had problems with interactions between clients, addressed through clustering

20 Compute Cluster
Structured as 6 clusters with a range of capabilities
135 nodes providing 1824 x86 CPU cores
Between 32 GB and 256 GB RAM per node (2 GB/core to 12 GB/core), depending on age
118 GPU nodes, mostly used by MX and tomography beamlines
All accessed via one Univa Grid Engine interface with multiple queues (see the sketch below)
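As an illustration of that single submission interface, a hedged sketch of driving Grid Engine's qsub from Python; the queue name, parallel environment and GPU resource name are hypothetical, not Diamond's actual configuration:

```python
# Hypothetical sketch: submitting a job through a Univa Grid Engine
# front end. "medium.q", "smp" and "gpu" are illustrative names only.
import subprocess

def submit(script, queue="medium.q", cores=4, gpu=False):
    cmd = ["qsub", "-q", queue, "-pe", "smp", str(cores)]
    if gpu:
        cmd += ["-l", "gpu=1"]   # resource name is site-specific
    cmd.append(script)
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return out.stdout.strip()    # Grid Engine echoes the assigned job id

print(submit("reconstruct.sh", cores=20, gpu=True))
```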

21 Network
Core network:
All high-data-rate beamlines are connected to two core switches with 2x10 GbE or 2x40 GbE links
Clusters and storage are connected to the two core switches with 2, 4 or 8 x 10 GbE links
Cluster and storage are also interconnected with InfiniBand
Beamlines:
Most detectors and workstations are connected with 1 GbE links
High-performance systems are connected with 10 GbE, with 40 GbE becoming the norm

22 Network Bandwidth Balance [diagram]
Beamline switches: 1 and 10 Gbit/s device links; 10 and 40 Gbit/s uplinks
Central switch: 400 Gbit/s
Cluster switch: 40x10 Gbit/s; cluster InfiniBand: 10x56 Gbit/s
Disks: 80 Gbit/s GPFS, 40 Gbit/s Lustre
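A quick sanity check on the balance, converting the link capacities above into approximate byte rates (a rough sketch that ignores protocol overheads and duplexing):

```python
# Link capacities from the diagram above, converted to byte rates.
links_gbit = {
    "central switch": 400,
    "cluster switch (40x10)": 40 * 10,
    "cluster InfiniBand (10x56)": 10 * 56,
    "GPFS disks": 80,
    "Lustre": 40,
}
for name, gbit in links_gbit.items():
    print(f"{name:28s} {gbit:4d} Gbit/s ~= {gbit / 8:4.0f} GB/s")
```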

23 High Density Computer Room
22 racks; each rack has 20 kW peak cooling and 2 x 20 kW peak power feeds
320 kW redundant power from two separate sub-stations; power from one side is UPS- and generator-backed
320 kW cooling water; primary cooling is from site chilled water, with a 220 kW standby chiller in case of problems
It's full; options for the next phase are being considered

24 [diagram: 40x20-core cluster, clusters, 3 PB parallel file system]

25 Realtime Data Storage and Analysis

26 Data Archive: STFC tape storage
Tape libraries: 2 x SD …,000-slot robots; 64 tape drives; potential capacity 100 PB
Data management and archive systems: Storage-D, Storage Resource Broker, iRODS, Storage Resource Manager, ICAT, TopCAT

27 Moving Data Off Site: a Science DMZ

28 Future
More of the same: detectors, automation, processing pipelines…
Develop post-visit processing services
Make visit archives and the application environment available to other systems (SCARF, …)
Working with STFC SCD on UKT0 and the Ada Lovelace Centre

29 Thank you

30

31 3600-image data set: images to density in 2 minutes

32 MX detectors and frame rates:
I02/VMXi: Pilatus2, 25 Hz
I03: Pilatus3, 100 Hz
I04: Pilatus2, 25 Hz
I…: Pilatus2, 25 Hz
I…: Pilatus2, 30 Hz
I23: Pilatus2, 12 Hz
I24: Pilatus3, 100 Hz
VMXm: TBC
Aggregate: > 300 frames/s, > 1800 MB/s sustained rate; sample exchange < 20 s
Worth noting that an Eiger 16M at 133 frames/s will actually reduce file-system load compared with a 100 frames/s Pilatus 6M
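The aggregate figures are easy to sanity-check; in this sketch the ~6 MB compressed Pilatus frame size is an assumption, chosen only because it is consistent with the quoted numbers:

```python
# Sanity check of the aggregate figures above. Frame rates are from the
# slide; the 6 MB/frame compressed image size is an assumption.
frame_rates_hz = [25, 100, 25, 25, 30, 12, 100]   # the beamlines listed above
frame_mb = 6.0                                     # assumed compressed frame size

total_fps = sum(frame_rates_hz)
print(f"aggregate: {total_fps} frames/s, {total_fps * frame_mb:.0f} MB/s")
# -> aggregate: 317 frames/s, 1902 MB/s  (consistent with ">300 fps, >1800 MB/s")
```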

33 Distributed Control Systems

34 Realtime Feedback & Feedforward [diagram]
Feedback controllers: TMBF, dispersion, orbit, coupling, tune
Feedforward controllers: tune, optics and orbit, driven by insertion-device gap
Measurements: pBPM and eBPM positions X/Y, tunes X/Y, emittance, ID camera
Hardware: master oscillator, RF amplifier and cavity, TMBF striplines, insertion devices, correctors and quads local to IDs, 248 quads, 96 skew quads, 172 correctors on sextupoles, 172 eBPMs, button pickups
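For orientation, a conceptual sketch of one iteration of an orbit feedback loop of the kind shown in the diagram; the response matrix, gain and update rule here are generic textbook choices, not Diamond's actual controller:

```python
# Conceptual orbit feedback step: read BPM positions, compute corrector
# kicks via the pseudo-inverse of the orbit response matrix, apply a gain.
# Shapes echo the slide (172 eBPMs, 172 correctors); the matrix values
# and gain are placeholders, not a machine model.
import numpy as np

n_bpms, n_corrs = 172, 172
rng = np.random.default_rng(0)
R = rng.normal(size=(n_bpms, n_corrs))   # placeholder response matrix
R_pinv = np.linalg.pinv(R)               # computed once, reused every cycle
gain = 0.1                               # loop gain << 1 for stability

def feedback_step(orbit_error_mm, kicks):
    """One iteration: nudge corrector kicks against the measured orbit error."""
    return kicks - gain * (R_pinv @ orbit_error_mm)

kicks = np.zeros(n_corrs)
orbit = rng.normal(scale=0.01, size=n_bpms)   # fake ~10 um rms orbit error
kicks = feedback_step(orbit, kicks)
```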

35 Data Formats
EPICS and GDA both need to write the data file, so we use HDF5 links to create one logical file from multiple real files. This avoids file-contention issues and allows the detector files to be highly optimised for performance.
The header data is written directly by GDA into the .nxs file; the detector data is written by the EPICS areaDetector HDF5 plugin into one or more .hdf5 files, which the .nxs file links to (see the sketch below).
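A minimal h5py sketch of this linking scheme; the file and dataset paths are illustrative rather than Diamond's actual NeXus layout:

```python
# One logical file from multiple real files via HDF5 external links.
import h5py
import numpy as np

# Stand-in for the EPICS areaDetector HDF5 plugin output:
with h5py.File("detector_000001.hdf5", "w") as det:
    det.create_dataset("entry/data/data",
                       data=np.zeros((10, 512, 512), dtype="uint16"))

# Stand-in for the GDA-written header file, linking to the detector data:
with h5py.File("scan.nxs", "w") as nxs:
    nxs["entry/instrument/detector/data"] = h5py.ExternalLink(
        "detector_000001.hdf5", "entry/data/data")

# A reader opens only scan.nxs and follows the link transparently:
with h5py.File("scan.nxs", "r") as nxs:
    print(nxs["entry/instrument/detector/data"].shape)   # (10, 512, 512)
```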

36 Example: Tomography
Tomography scans are demanding: data rate ~500 MB/s, data size > 100 GB.
The first processing operation reads the data perpendicular to the write direction: a classic matrix-transpose problem and a real challenge for typical cache design. It is completely unsuited to running inside the GDA server.
[figure: image frames vs sinogram frames]
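The transpose problem in miniature (array shapes scaled down; illustrative only):

```python
import numpy as np

# Frames are written one projection image at a time: axis 0 is the write order.
frames = np.zeros((180, 216, 256), dtype="uint16")  # (projections, rows, cols)

# A sinogram is one detector row across ALL projections: a slice along the
# axis perpendicular to the write order, touching every frame on disk.
sinogram = frames[:, 100, :]                        # shape (180, 256)
print(sinogram.shape)
```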

37 Tomography Data File Format
Data must be optimised for reading sinograms. All frames are written to a single file, arranged in chunks of a fixed number of detector rows by a fixed number of frames. The chunk size matches the Lustre stripe size, so each chunk is written to a different Lustre server. Data stays in cache until all frames in a chunk have been written.
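A sketch of such a chunked layout in h5py; the chunk shape and sizes are illustrative, chosen only so that one chunk is a few MB, comparable to a Lustre stripe:

```python
import h5py
import numpy as np

n_proj, n_rows, n_cols = 1800, 2160, 2560
with h5py.File("tomo.h5", "w") as f:
    dset = f.create_dataset(
        "data",
        shape=(n_proj, n_rows, n_cols),
        dtype="uint16",
        # 64 frames x 8 detector rows x full width = ~2.6 MB per chunk,
        # so a sinogram read touches whole chunks rather than scattered rows.
        chunks=(64, 8, n_cols),
    )
    dset[0] = np.zeros((n_rows, n_cols), dtype="uint16")  # write the first frame
```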

38 New Detector Parallel Control & DAQ
Large-scale/high-speed detector developments:
Percival: 300 Hz, 120 Hz (6 GB/s)
Excalibur: to use the same framework, for scalability/SWMR/VDS
TimePix3: large-area time-resolved detector
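A sketch of the SWMR (single-writer/multiple-readers) pattern mentioned above: the DAQ process appends frames while analysis processes read the growing file live. File names and sizes are illustrative:

```python
import h5py
import numpy as np

# Writer side (e.g. the detector file writer):
f = h5py.File("live.h5", "w", libver="latest")
dset = f.create_dataset("data", shape=(0, 256, 256),
                        maxshape=(None, 256, 256),
                        dtype="uint16", chunks=(1, 256, 256))
f.swmr_mode = True                     # readers may now attach

for i in range(5):                     # append frames as they arrive
    dset.resize(i + 1, axis=0)
    dset[i] = np.full((256, 256), i, dtype="uint16")
    dset.flush()                       # make the new frame visible to readers

f.close()

# Reader side (a separate process in reality):
with h5py.File("live.h5", "r", libver="latest", swmr=True) as r:
    d = r["data"]
    d.refresh()                        # pick up the latest dataset extent
    print(d.shape)                     # (5, 256, 256)
```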

39 Network Layout
