High-speed data ingest and analysis for photon science at DESY with IBM Spectrum Scale.


1 High-speed data ingest and analysis for photon science at DESY with IBM Spectrum Scale. Martin Gasthuber, Stefan Dietrich, Manuela Kuhn, Uwe Ensslin, Janusz Malka / DESY

2 DESY Where, Why and What
> founded December 1959
> Research Topics: accelerator research/build/operate; Particle Physics (HEP), e.g. Higgs@LHC; Photon Science, e.g. X-ray crystallography, looking at everything from viruses to van Gogh and fuel cells; Astroparticle Physics
> 2 sites: Hamburg (main site) and Zeuthen (close to Berlin)
> ~2300 employees, >3000 guest scientists annually

3 PETRA III
> storage ring accelerator, 2.3 km circumference
> X-ray radiation source
> since 2009: 14 beamlines in operation
> starting February 2014: shutdown for the new extension with 10 additional beamlines
(Figure: sample raw file from beamline P11, Bio-Imaging and Diffraction)

4 PETRA III Extension
> extension for PETRA III
> 2 new experiment halls
> 10 new beamlines
> bigger and faster detectors...
> in operation since April 2015

5 Shooting crystals...

6 Shooting crystals...

7 New Challenges
> New detectors achieve much higher data rates:
Pilatus 300k: 1.2 MB @ 200 Hz
Pilatus 6M: 25 MB @ 25 Hz or 7 MB @ 100 Hz
PCO Edge: 8 MB @ 100 Hz
PerkinElmer: 16 MB @ 15 Hz
Lambda: 60 Gb/s, 2000 Hz file rate
Eiger: 30 Gb/s, 2000 Hz file rate
> Old storage system hit its limits
> New storage system has to be installed during the PETRA III shutdown!
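
To put these figures in perspective, here is a short back-of-the-envelope sketch of the sustained rates implied by frame size times frame rate for the frame-based detectors above (the numbers are copied from the slide; the conversion itself is plain arithmetic and not part of the original presentation):

```python
# Rough sustained data rates implied by frame size x frame rate.
# Detector figures are taken from the slide above; Lambda and Eiger are
# omitted because the slide already quotes them directly in Gb/s.
detectors = {
    "Pilatus 300k": (1.2e6, 200),   # (bytes per frame, frames per second)
    "Pilatus 6M":   (25e6, 25),
    "PCO Edge":     (8e6, 100),
    "PerkinElmer":  (16e6, 15),
}

for name, (frame_bytes, hz) in detectors.items():
    rate = frame_bytes * hz            # bytes per second
    print(f"{name:>12}: {rate / 1e6:7.1f} MB/s ({rate * 8 / 1e9:.2f} Gb/s)")
```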

8 Requirements for New Storage System
> High performance for single clients: >1 GB/s
> Handle data peaks (burst buffer): data acquisition has a bursty nature (first measurement, change sample, second measurement, and so on); duration: minutes, hours, days
> Protection between beamlines: groups compete, so data must not be readable from every PC; data must not be readable by the next scientific group using the beamline; data acquisition at one beamline should not interfere with other beamlines

9 Limitations
> datacenter is ~1 km away
> little space in the experiment hall and at the beamline, so local storage is not an option
> 10 Gigabit Ethernet available (only)
> mix of operating systems and ages: Windows, multiple Linux distributions and releases, sometimes unsupported versions
> shared accounts for data acquisition per beamline
> very heterogeneous environment: technical, social, requirements
> time, personnel and money are limited, as usual ;-)

10 We are not alone...

11 but time is not on our side.

12 DESY & IBM Collaboration
> Collaboration with IBM within the scope of the SPEED project
> Timeline for the SPEED project: June to March 2015
> For DESY: access to experts from development, research and support
> 6 people from DESY, 5+ from IBM
> Solution based on IBM Spectrum Scale (GPFS) and IBM Elastic Storage Server (ESS); ESS supports GPFS Native RAID
> Initial investment: 1x GSS24 (232x 3 TB NL-SAS)
> Loan (beta HW): 1x ESS GL4 (232x 4 TB NL-SAS), 1x ESS GS1 (24x 400 GB SSD)
> So far excellent performance and stability (more than expected!)
(Pictured: ESS GS1, ESS GL4/GSS24)

13 Logical Dataflow
> From the cradle to the grave
> Keep a copy of the data for several days/weeks
> Keep a copy of the data for several months

14 Logical Building Blocks / Key Components
> 4x GPFS clusters: looks like one cluster from the user point of view; provides separation for admin purposes
> 2x GPFS file systems: provide isolation between data taking (beamline file system) and long-term analysis (core file system)
> 2x IBM ESS GS1: metadata for the core file system, metadata for the beamline file system, Tier 0 of the beamline file system; having two ESS improves high availability for data taking
> 2x IBM ESS GL4: Tier 1 of the beamline file system, data for the core file system; convert GSS24 to ESS GL4
> Additional nodes in the access cluster and beamline cluster; the analysis cluster already exists
> All nodes run GPFS
> All nodes are connected via InfiniBand

15 Physical Layout
(Diagram of the physical layout)
> P3 hall: up to 24 detectors with proprietary links, control servers, beamline nodes, 10GE access layer (total: 160 Gb/s, 320 max.)
> P3 hall to datacenter: ~1 km, RTT ~0.24 ms
> Datacenter: initially 7 proxy nodes, InfiniBand (56 Gbit FDR), 2x switches, 2x GPFS fabrics, analysis cluster, GS1 / ESS GL6 / ESS GL4 NSD servers, DESY network

16 Access Protocols
> Proxy nodes export the GPFS filesystems via NFS + SMB + ZeroMQ
> Beamline filesystem: NFSv3 (kernel), SMB based on Samba 4.2, ZeroMQ
> ZeroMQ: messaging library, available for multiple languages, multiple message patterns available (PUSH/PULL, REQ/REPLY); one-way tunnel from detector to GPFS, a 'vacuum cleaner' for data ;-)
> Core filesystem: NFSv3 (NFSv4.1/pNFS with Ganesha awaited), SMB based on Samba 4.2, native GPFS
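
A minimal sketch of the ZeroMQ "one-way tunnel" pattern described above, using pyzmq with a PUSH/PULL socket pair. The host name, port and file layout are hypothetical illustrations, not DESY's actual configuration:

```python
# Sketch only: detector/control PC pushes finished files, proxy node pulls
# them and writes into GPFS. Endpoint and paths are hypothetical.
import zmq

def detector_side(endpoint="tcp://proxy-node:5555"):
    """PUSH socket on the detector/control PC: send each finished file."""
    ctx = zmq.Context.instance()
    push = ctx.socket(zmq.PUSH)
    push.connect(endpoint)
    with open("frame_000001.cbf", "rb") as f:
        push.send_multipart([b"frame_000001.cbf", f.read()])

def proxy_side(endpoint="tcp://*:5555", target_dir="/gpfs/beamline/current/raw"):
    """PULL socket on the proxy node: write incoming files into GPFS."""
    ctx = zmq.Context.instance()
    pull = ctx.socket(zmq.PULL)
    pull.bind(endpoint)
    while True:
        name, payload = pull.recv_multipart()
        with open(f"{target_dir}/{name.decode()}", "wb") as out:
            out.write(payload)
```

PUSH/PULL fair-queues across multiple senders, which is why it suits a one-way "vacuum cleaner" from many detector PCs into a single ingest point.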

17 Beamline Filesystem
> 'Wild West' area for the beamline
> Only host-based authentication, no ACLs
> Access through NFSv3, SMB or ZeroMQ
> Optimized for performance: 1 MiB filesystem blocksize; NFSv3 pre-optimization: ~60 MB/s, optimized: ~600 MB/s; SMB: ~ MB/s; 'carve out' low capacity & large number of spindles
> Tiered storage: Tier 0: SSD burst buffer (< 10 TB), migration after a short period of time; Tier 1: ~80 TB capacity; migration between GPFS pools via policy run (see the sketch below)
(Diagram: detector via proprietary link and control server to beamline nodes; access through NFS/SMB/0MQ via the proxy nodes, native GPFS on the metadata proxy nodes; GS1 NSD for metadata, GL6/GL4 NSD for data)
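
A hedged sketch of what a policy-driven migration from the SSD burst buffer (Tier 0) to the capacity pool (Tier 1) could look like. The pool names, filesystem device name and age threshold are hypothetical; the actual DESY policy is not shown in the slides. mmapplypolicy is the standard Spectrum Scale policy engine command:

```python
# Sketch: run a GPFS MIGRATE policy from an SSD pool to a capacity pool.
# 'ssdpool', 'datapool' and 'beamlinefs' are hypothetical names.
import subprocess
import tempfile

POLICY = """
RULE 'tier0_to_tier1'
  MIGRATE FROM POOL 'ssdpool'
  THRESHOLD(80,50)
  TO POOL 'datapool'
  WHERE (CURRENT_TIMESTAMP - MODIFICATION_TIME) > INTERVAL '1' HOURS
"""

def run_migration(device="beamlinefs", dry_run=True):
    with tempfile.NamedTemporaryFile("w", suffix=".pol", delete=False) as f:
        f.write(POLICY)
        policy_file = f.name
    cmd = ["mmapplypolicy", device, "-P", policy_file]
    if dry_run:
        cmd += ["-I", "test"]       # evaluate the rules without moving data
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    run_migration()
```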

18 Core Filesystem
> Ordered world
> Full user authentication
> NFSv4 ACLs (only)
> Access through NFSv3, SMB or native GPFS (dominant)
> GPFS policy runs copy data from the beamline filesystem to the core filesystem: the policy run creates the copy under a single UID/GID, ACL inheritance becomes active, raw data is set to immutable
> 8 MiB filesystem blocksize
> Fileset per beamtime
> XATTRs used for data-flow steering of the policy runs (see the sketch below)
(Diagram: analysis nodes and proxy nodes; beamline filesystem with GS1 NSD for metadata and GL6/GL4 NSD for data, migration between GPFS pools; core filesystem with GS1 NSD for metadata and GL6/GL4 NSD for data)
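
The slide only states that extended attributes steer the data flow; one way this can work is to tag files with a state attribute that a subsequent policy run selects on (e.g. via the XATTR() function in the GPFS policy language). The attribute name, values and path below are hypothetical illustrations:

```python
# Sketch: tag a raw file with a hypothetical XATTR so a later policy run
# knows whether it still has to be copied to the core filesystem.
import os

RAW_FILE = "/gpfs/beamline/current/raw/run_0042_00001.h5"   # hypothetical path

# Freshly written file: mark it as waiting for the copy policy run.
os.setxattr(RAW_FILE, "user.dataflow.state", b"pending-copy")

# Later, after the copy to the core filesystem succeeded, flip the tag.
os.setxattr(RAW_FILE, "user.dataflow.state", b"copied")
print(os.getxattr(RAW_FILE, "user.dataflow.state"))
```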

19 Activities since April 2015

20 European XFEL - a leading new research facility
> The European XFEL (X-Ray Free-Electron Laser) is a research facility under construction which will use high-intensity X-ray light to help scientists better understand the nature of matter.
> Location: Schenefeld and Hamburg, Germany
> User facility with 280 staff (+ 230 from DESY)
> 2017: start of user operation
(Photo: Schenefeld site at the start of user operation)

21 EuXFEL - participants
> Organized as a non-profit corporation in 2009 with the mission of design, construction, operation, and development of the free-electron laser
> Supported by 11 partner countries
> Germany (federal government, city-state of Hamburg, and state of Schleswig-Holstein) covers 58% of the costs; Russia contributes 27%; each of the other international shareholders 1-3%
> Total budget for construction (including commissioning): 1.22 billion euros at 2005 prices; 600 M euros contributed in cash, over 550 M euros as in-kind contributions (mainly manufacture of parts for the facility)

22 Facility overview
(Map: electron source and linear accelerator beginning at DESY-Bahrenfeld, undulator systems beginning near Osdorfer Born, electron beam to the photon beamlines, experiment hall, laboratories and offices at Schenefeld)

23 From electron to coherent X-ray

24 DAQ Challenges
> Readout rate driven by the bunch structure: 10 Hz trains of pulses, 4.5 MHz pulses within a train ( pulses)
> Data volume driven by the detector type:
Detector type / sampling / data per pulse / data per train / data per second:
- 1-channel digitizer: 5 GS/s, ~2 kB, ~6 MB, ~60 MB/s
- 1 Mpxl 2D camera: 4.5 MHz, ~2 MB, ~1 GB, ~10 GB/s
- 4 Mpxl 2D camera: 4.5 MHz, ~8 MB, ~3 GB, ~30 GB/s*
> Volume depends on detector type and pulses per train; 1-N trains per file -> 1 GB files or larger
> * Limited by the AGIPD detector's internal pipeline depth (352 img/sec), hence factor ~3 compared to the LPD 1 Mpx
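
A quick sanity check of the per-train and per-second volumes quoted above. The per-pulse sizes come from the table; the recorded pulse counts per train are assumptions chosen to be consistent with the quoted per-train volumes (the slide elides the exact number):

```python
# Back-of-the-envelope check of the DAQ table above.
TRAINS_PER_SECOND = 10  # European XFEL bunch-train repetition rate

detectors = {
    # name: (bytes per pulse, assumed recorded pulses per train)
    "1-channel digitizer": (2e3, 2700),
    "1 Mpxl 2D camera":    (2e6, 512),
    "4 Mpxl 2D camera":    (8e6, 352),   # AGIPD pipeline depth, see footnote
}

for name, (per_pulse, pulses) in detectors.items():
    per_train = per_pulse * pulses
    per_second = per_train * TRAINS_PER_SECOND
    print(f"{name:>20}: {per_train/1e9:5.2f} GB/train, {per_second/1e9:5.1f} GB/s")
```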

25 How to cope with that?
> Standardize detector-to-DAQ interfaces: multiple 10GE network links to receive data from the detector, standard data transfer protocols, standard data formats (HDF5)
> Include software-based computing capability in the DAQ chain: data receiving, aggregation, reduction, formatting; enable rejection of bad-quality data; provide a real-time overview of collected data, e.g. compute statistics, visualize data
> Provide a highly optimized infrastructure and resources for data recording close to the experiment station: dedicated network for DAQ, distributed storage systems with controlled/restricted access, HPC systems for demanding storage, GPFS on ESS systems
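
As a minimal illustration of the "standard data formats (HDF5)" point, the sketch below writes one train's worth of detector images with h5py. The group/dataset layout and sizes are hypothetical and do not reproduce the actual European XFEL file format:

```python
# Sketch: store one (assumed) train of 352 frames in an HDF5 file, chunked
# per image so single frames can be read back efficiently.
import h5py
import numpy as np

N_IMAGES, HEIGHT, WIDTH = 352, 1024, 1024

with h5py.File("train_0000001.h5", "w") as f:
    dset = f.create_dataset(
        "INSTRUMENT/detector/image/data",   # hypothetical dataset path
        shape=(N_IMAGES, HEIGHT, WIDTH),
        dtype="uint16",
        chunks=(1, HEIGHT, WIDTH),          # one chunk per image
        compression="gzip",
        compression_opts=1,
    )
    dset.attrs["train_id"] = 1
    # In the real DAQ the frames arrive from the train builder; here we write
    # zero-filled frames one by one to keep memory use small.
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.uint16)
    for i in range(N_IMAGES):
        dset[i] = frame
```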

26 DAQ data flow and processing

27 Infrastructure locations
(Floor plan of the experiment hall with the SASE1, SASE2 and SASE3 areas)
> 4 computer rooms in the experiment hall (red, a.k.a. balcony rooms)
> Dedicated rack rooms for the instruments (orange)

28 Data flow, more abstract
> Chain per SASE (initially 2, finally 3): Detector -> Train Builder -> PC layer (16-64 nodes) -> online storage (online FS) -> InfiniBand MetroX (FDR/FDR-10, ~4 km distance, ~22 µs latency, RDMA enabled) -> offline storage (offline FS, home FS, scratch FS shared for all SASE)
> Detector to train builder / PC layer: 10GE UDP + jumbo frames
> Train Builder: reshuffles picture modules into whole pictures, pictures shuffled into trains, sends single trains per channel
> PC layer: data analysis for monitoring; data reduction, e.g. FPGA-based compression; veto; file creation in memory and on the online filesystem; every node creates a 1 GB HDF5 file every 1.6 s
> Online cluster nodes: online data analysis and re-calibration
> Transfer online -> offline storage; under evaluation: multiple or stretched cluster; under evaluation: GPFS AFM or custom scripts
> Offline storage: shared across experimental stations (SASE); data arrives after a delay and is stored on GPFS; copy data to dCache (tape copy, export); ACLs; raw data access only from dCache; offline cluster stores calibrated data on GPFS; user analysis from calibrated data
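
A rough sketch of a UDP receiver on a PC-layer node, in the spirit of the "10GE UDP + jumbo frames" link from the train builder described above. The port number, buffer sizes and datagram size are hypothetical and this is not the European XFEL DAQ implementation; jumbo frames themselves (MTU 9000) are a property of the network interface configuration, not of this code:

```python
# Sketch: collect roughly one 1 GB train over UDP with an enlarged kernel
# receive buffer so short bursts are not dropped.
import socket

PORT = 4600                     # hypothetical train-builder channel port
DATAGRAM_SIZE = 8960            # fits within a 9000-byte jumbo frame

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024 * 1024)
sock.bind(("0.0.0.0", PORT))

received = 0
while received < 1_000_000_000:          # roughly one 1 GB train
    data, addr = sock.recvfrom(DATAGRAM_SIZE)
    received += len(data)
print(f"received {received / 1e9:.2f} GB from {addr}")
```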

29 Even higher altitude: looking at the rates of two beamlines

30 Quality of Service control: IOPS spent profile

31 Challenges to continue on
> bandwidth optimization for ingest from the detector (the 30 GB/s detector beamline): highest priority
> offline storage to dCache: feed the calibration process and copy to tape, preserving GPFS NFSv4 ACLs
> control user access: largely non-predictable (QoS)
> prove fault tolerance: site failure, link failure (Ethernet/InfiniBand)
> all-flash for online storage looks economically feasible (0.5 PB per unit); performance figures under investigation; should help a lot to get chaotic user access merged silently; saves inter-tier copy overhead
> homogeneous, event-driven data migration (instead of / complementing policy runs): cluster-wide inotify, (L)ight (W)eight (E)vent

32 EOF
