PaN-data ODI Deliverable D8.6


PaN-data ODI Deliverable D8.6

D8.6: Evaluation of coupling of prototype to multi-core architectures (Month 36 - October 2014)

Grant Agreement Number: RI-283556
Project Title: PaN-data Open Data Infrastructure
Title of Deliverable: Evaluation of coupling of prototype to multi-core architectures (Month 36 - October 2014)
Deliverable Number: D8.6
Lead Beneficiary: STFC
Deliverable Dissemination Level: Public
Deliverable Nature: Report
Contractual Delivery Date: 01 March 2014 (Month 30)
Actual Delivery Date: November 2014

The PaN-data ODI project is partly funded by the European Commission under the 7th Framework Programme, Information Society Technologies, Research Infrastructures.

Abstract

Reports D8.1 to D8.5 have detailed much of the work necessary to support parallel and high-speed writing and reading of Hierarchical Data Format (HDF) files, and some of the resulting software and applications have already been used in the D5.3 Virtual Laboratories work package and report. The main focus of this report is to examine in more detail how this applies to tomography and reconstruction, one of the most resource-hungry scientific processes and one in great demand. Many of the initial requirements, and those that emerged in the previous scalability reports, have been addressed; since D8.5 there has been significant progress in optimizing the use of computing clusters and high-performance file systems, and in the collaboration with the HDF Group in the U.S.

Keyword list: PaN-data ODI, Scalability

Document approval: Approved for submission to EC by all partners on xx.xx.xx

Revision history:
Issue 1.0 - Bill Pulford - 10 October 2014 - Initial version
Issue 1.1 - Diamond co-workers - 13 November 2014 - Complete version for discussion

Acknowledgements: Jon Thompson (DLS), Ulrik Pedersen (DLS), Mark Basham (DLS), Frederik Ferner (DLS), Nick Rees (DLS), Heiner Billich (PSI/SLS), Frank Schluenen (DESY), Ka Wanelik (DLS) and the HDF Group.

Table of contents

1. Introduction
1.1. Scope of the report
2. A brief description of the tomographic process
2.1. The Tomographic Acquisition Process Schematic
3. The Data Analysis Process
3.1. The production of sinograms (the Radon Transform)
3.2. Visualizing the reconstruction
4. Acquisition and the reconstruction with different file systems
4.1. Introduction
Case 1 - Low resolution
Case 2 - High resolution
4.2. Some conclusions from the above
5. Update on Hierarchical Data File Format (HDF5) support
6. Software available

1. Introduction

The PaN-data ODI project sets out to optimize coordination between research groups working at one or more different large experimental facilities across Europe, with the potential of expanding its scope across the scientific world. There are a number of components to the project, such as common authentication, application software and federated searchable data storage systems. This report relates to a joint research activity, Work Package 8 (Scalability), which concerns the standardization of file formats and research to identify supporting data storage architectures that optimize speed and data storage capacity. The timeline for this work package:

D8.1: Definition of pHDF5-capable NeXus implementation - Software/Report - Delivered Aug 2012
D8.2: Evaluation of parallel file systems and MPI I/O implementations - Report - Delivered Aug 2012
D8.3: Implementation of pNeXus and MPI I/O on parallel file systems
D8.4: Examination of distributed parallel file systems - Month 21 (June 2013)
D8.5: Demonstrate capabilities on selected applications - Month 21 (June 2013)
  o A demonstration application is distributed and is in daily use by many users at a number of European facilities; see DAWNScience.
D8.6: Evaluation of coupling of prototype to multi-core architectures (Month 30, March 2014) - Report - This report

1.1. Scope of the report

Reports D8.1 to D8.5 have detailed much of the work necessary to support parallel and high-speed writing and reading of Hierarchical Data Format (HDF) files. Further work has been done since the delivery of the D8.5 report, particularly in the exploitation of the applications, computing resources and supporting libraries, and, after discussions with the project manager, it was decided to review this in more detail. Included are examinations of the performance of the computing clusters and advanced parallel data storage available at DLS when applied to the processes involved in tomography data acquisition and reconstruction. A number of scientific disciplines can benefit by exploiting the enhanced technology relevant to Work Package 8, including:

a) Macromolecular Crystallography (MX)
b) Spectroscopy
c) Scattering, such as Wide- and Small-Angle X-ray Scattering (WAXS and SAXS)
d) Data from X-ray Free Electron Lasers (XFEL)
e) Tomography

Tomography acquisition and reconstruction was selected for this report as the processes involved are probably easier to understand than most and link directly with the computing resources.

2. A brief description of the tomographic process

The scientific details of the tomographic process are covered in many publications (1,2,3); it is only intended here to describe how the process contributes to the high data volumes and consequent computer processing aspects covered by this report.

2.1. The Tomographic Acquisition Process Schematic

Figure 1 - The tomographic acquisition process

The schematic above illustrates the tomography acquisition process and indicates its relative comprehensibility with respect to the underlying computing. The acquisition process essentially provides a microscope capable of producing a three-dimensional reconstruction of the sample. These samples can be of many types, ranging from biological cells to metallic objects. The X-ray beam is incident on the rotating sample, which projects images via suitable optics onto the detector. The sample is scanned vertically in steps varying from micrometres to millimetres, depending on the sample, to give a stack of images that can be transformed with resource-intensive applications to reconstruct a 3-dimensional image. The detectors typically used at DLS are detailed in Table 1. Of these the PCO Edge, although smaller, has a much higher repetition rate during an experiment and provides a serious challenge to the data rates and volumes needing to be supported by the file systems. The resolution of the resulting 3-d image is governed by the vertical scan step size (z-axis) and the sampling of the pixels in the x and y axes of the CCD camera. There are frequently two steps in the acquisition:

a) The vertical step size is increased and the sampling frequency of the image is reduced. This permits the rapid production of low-resolution 3-d images for evaluation purposes.
b) Full acquisition, where the z-axis scan steps are optimized for high resolution and all pixels of the detector are used. (Often the vertical scan ranges of the PCO 4000 and the PCO Edge are reduced from 6000 to 4000 and from 2600 to 1800 respectively for practical purposes.)

Detector   Image size                   Scan size   Collection time
PCO 4000   4000 x ... pixel grayscale   ~80 GB      ~30 minutes
PCO Edge   2600 x ... pixel grayscale   ~32 GB      ~2 minutes

Table 1: The principal detectors used at the Diamond Light Source for tomography, also commonly used at other facilities.
Most large facilities will run experiments that involve the use of these resource-intensive detectors in parallel, and it is the challenge for the infrastructure and file systems to support the resulting multiplication of bulk input and output requirements. Moreover, data processing and analysis are frequently done while data are being taken and transferred to storage and archive; this results in additional concurrent read and write operations.

1 PCO 4000 and PCO Edge are high-speed cameras developed and supplied by PCO AG, Donaupark 11, Kelheim, Germany.
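The collection times and volumes in Table 1 translate directly into the sustained write rates the file systems must absorb; a back-of-envelope check, using only the table's figures:

```python
# Sustained per-detector write rates implied by Table 1.
pco4000_rate_gb_s = 80 / (30 * 60)   # ~80 GB in ~30 minutes -> ~0.044 GB/s
pco_edge_rate_gb_s = 32 / (2 * 60)   # ~32 GB in ~2 minutes  -> ~0.27 GB/s

# Several beamlines running such detectors in parallel, plus concurrent
# reads for on-the-fly processing and archiving, multiply these figures.
print(round(pco_edge_rate_gb_s, 3), "GB/s")
```

This is why the PCO Edge, despite its smaller files, is the harder case: its sustained rate is roughly six times that of the PCO 4000.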

3. The Data Analysis Process

3.1. The production of sinograms (the Radon Transform)

Figure 2 - The basic process of converting an acquired image stack (A), via an operation producing sinograms (B), to a reconstructed image (C)

The orthogonal slices of the image stack are transformed using the standard algorithm into a stack of sinograms of equal data volume to the original image stack. Notes:

a) This transformation is highly parallelizable and thus conducive to the scalability architecture. The sinogram stack is used as the base data to produce the reconstruction. The reconstruction algorithm used varies according to the sample under investigation and the conditions of the image data acquisition. Nevertheless the processing is normally also highly parallelizable.

The data acquisition process for tomography varies for each facility but is overall represented by the diagram below. The common components include the detector (often a PCO 4000 or Edge), its controlling hardware and firmware (EPICS Area Detector, Lima or other), the acquisition software (GDA, SPEC or other) and the associated high-performance data storage and computing clusters. The schematic below is based on that found at DLS. The major focus points of this report are:

- The high-capacity data flow from the detector to the data storage (process A).
- The initial processing to sinograms using software based directly in the detector controller at DLS (process B).
- The use of the computing clusters to perform the 3-d reconstruction of the image from the sinograms (process C).
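The sinogram idea can be illustrated with a toy example (invented data, not the DLS pipeline): each row of a sinogram is the 1-D projection of one slice at one rotation angle, and each slice can be processed independently, which is what makes the step parallelize so well. A minimal numpy sketch, using only the 0- and 90-degree projections (a full Radon transform would rotate the slice for every acquisition angle):

```python
import numpy as np

# A 4x4 "slice" with a 2x2 sample in the centre (toy data).
slice_2d = np.zeros((4, 4))
slice_2d[1:3, 1:3] = 1.0

proj_0 = slice_2d.sum(axis=0)            # projection at 0 degrees
proj_90 = slice_2d.sum(axis=1)           # projection at 90 degrees
sinogram = np.stack([proj_0, proj_90])   # one row per rotation angle

# Each slice of the stack yields its own sinogram independently,
# which is why the step distributes cleanly across cluster nodes.
print(sinogram.shape)                    # (n_angles, n_detector_pixels)
```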

The evaluation image processing actually used during the data acquisition, described in 2.1.a, tends not to be hugely resource-hungry and can be performed by lower-performance hardware. The full reconstruction of 2.1.b provides the primary challenge to the file system and the associated cluster hardware.

Figure 3 - A schematic diagram of the acquisition and analysis architecture at DLS

The cluster resources

The computing cluster hardware required to support the above processes tends to be similar across most facilities; at DLS it consists of the following:

Name     Nodes   CPUs   CPU                   Clock speed   RAM     Accelerators                           Network
COM...   ...     ...    Intel Xeon E...       ... GHz       32GB    NVIDIA Tesla S1070 GPUs (2 per node)   1Gb/s Ethernet
COM...   ...     ...    Intel Xeon E5420      ... GHz       16GB    -                                      1Gb/s Ethernet
COM...   ...     ...    Intel Xeon E...       ... GHz       32GB    NVIDIA Tesla S1070 GPUs (2 per node)   1Gb/s Ethernet
...      2       2      Intel Xeon E...       ... GHz       48GB    -                                      1Gb/s Ethernet
...      37      2      Intel Xeon X...       ... GHz       24GB    -                                      1Gb/s Ethernet, QDR Infiniband
...      20      2      Intel Xeon E...       ... GHz       32GB    -                                      1Gb/s Ethernet, QDR Infiniband
COM...   ...     ...    Intel Xeon X...       ... GHz       48GB    NVIDIA Tesla M2090 GPUs (2 per node)   2Gb/s Ethernet, QDR Infiniband
...      40      2      Intel Xeon E... v...  ... GHz       128GB   -                                      1Gb/s Ethernet, FDR Infiniband

Overall this comprises ~2000 cores of x86 variants with ~80 NVIDIA GPUs.

Storage hardware

The procurement of storage hardware changes continually, mainly stimulated by the ever-increasing volumes of acquired data. DLS currently has 4 beamline storage systems supporting ~30 beamlines.

Identity    File system   Size                 Details
GPFS01      GPFS          1 Petabyte           DDN SFA12K
Lustre03    Lustre        0.5 Petabyte         DDN SFA10K
Lustre01    Lustre        0.5 Petabyte         DDN S2A9900
Commodity   XFS           Sum to ~1 Petabyte

Note: This report considers systems 1-3 (GPFS01, Lustre03 and Lustre01). The XFS systems support those beamlines without requirements for high-performance computing.

3.2. Visualizing the reconstruction

The final stage of a tomography experiment is the creation and visualization of the reconstructed image. A number of software suites provide this functionality, the ideal case being software that can read directly from the HDF5 (NeXus) files created by the Area Detector during the data collection process. At DLS we use a Python pipeline which reads directly from the HDF5 files and allows standard reconstruction routines to process the data in a parallel fashion. The DAWN package [1] provides a convenient graphical user interface to this pipeline and is used frequently during the data acquisition process to evaluate data as soon as it has been collected on the beamline. The core requirement on the high-performance file system and cluster resources is that either portions of or the entire reconstruction should be complete and visible as soon as possible after the scan, to enable future data collections to be steered by the data already collected. Although DAWN allows the data to be visualized in various ways, there are always requirements for dedicated and specialized volumetric data analysis and visualization tools; this requirement is filled at DLS by the Avizo package. Other open source and commercial packages are also available.
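A pipeline of this shape typically opens the HDF5 file and hands individual frames to worker processes; a minimal sketch with h5py (the file name and dataset path are invented for illustration — real NeXus tomography files follow the NXtomo application definition):

```python
import os
import tempfile

import h5py
import numpy as np

# Create a small stand-in for an acquisition file (toy layout and data).
path = os.path.join(tempfile.mkdtemp(), "demo_scan.nxs")
with h5py.File(path, "w") as f:
    f.create_dataset("entry/data/data", data=np.random.rand(8, 16, 16))

# Reader side: h5py reads only the frames requested, so each worker in
# a parallel pipeline can fetch just its own share of the image stack.
with h5py.File(path, "r") as f:
    stack = f["entry/data/data"]
    n_frames = stack.shape[0]
    first = stack[0]          # lazy, per-frame read from disk
print(n_frames, first.shape)
```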

Figure 4 - An example of the DAWN tomography GUI being used to preview slices from a data collection and set up the parameters for a full reconstruction; in this case the view is of a salt solution droplet, with growing bubbles and a salt crystal shown in this frame.

Figure 5 - The full reconstructed volume being viewed in DAWN using its TIFF stack visualisation options, slicing across the entire dataset to view the salt crystal, solution and support from the side.

Figure 6 - The same volume as rendered by Avizo, clearly showing the salt crystals and bubbles in the salt solution.

4. Acquisition and the reconstruction with different file systems

4.1. Introduction

The evolution and procurement of hardware is rapid; consequently the intention of this section is not to produce definitive results for the operations required using different file storage and reconstruction architectures. It is rather to provide some underlying information concerning the technology in action, and some hard-won observations that may be useful for other facilities and valuable for decisions in subsequent projects. This work is being done at DLS but uses technology that is readily available at a cost and should be able to be run at collaborating facilities. At this point DLS has the compute clusters and data file storage systems described in section 3, which has the inbuilt advantage of enabling a comparison of the properties of these commonly used high-performance systems. The testing involved using the same sample and data processing in each case but switching the supporting computer hardware and data storage technology. Two cases are considered, corresponding to low-resolution and high-resolution reconstruction (2.1.a and 2.1.b above). Each test concerns two major factors that influence the performance of the operations:

a) File copy speed, a measure of the data storage system and network technology.
b) The reconstruction time, influenced by the processor performance.
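The two factors above come down to straightforward wall-clock timing; a sketch of the harness shape (paths and workload are invented — the actual tests submitted real reconstruction jobs through the UGE scheduler):

```python
import os
import shutil
import tempfile
import time

def time_copy(src: str, dst_dir: str) -> float:
    """Time a single file copy: the file-system/network factor (a)."""
    t0 = time.perf_counter()
    shutil.copy(src, os.path.join(dst_dir, os.path.basename(src)))
    return time.perf_counter() - t0

def reconstruction_stub() -> float:
    """Stand-in for factor (b): in the real tests this is a cluster
    reconstruction job; a dummy workload keeps the sketch runnable."""
    t0 = time.perf_counter()
    sum(i * i for i in range(100_000))
    return time.perf_counter() - t0

# 10 sequentially run cloned processes, as on the plots' horizontal axes.
src = tempfile.NamedTemporaryFile(delete=False)
src.write(b"x" * 4096)
src.close()
dst = tempfile.mkdtemp()
copy_times = [time_copy(src.name, dst) for _ in range(10)]
recon_times = [reconstruction_stub() for _ in range(10)]
print(len(copy_times), len(recon_times))
```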

c) In each case the evaluation was done on the basis of 10 sequentially run cloned processes (see the horizontal axes of each plot). In each test the data storage technology (Lustre01, Lustre03, GPFS01) is identified directly on the diagram.

Case 1 - Low resolution

Parameters: Detector = PCO Edge, Cluster = COM07, Resolution = 2560 x 2160 (111 scan steps), NX size = 128k, Total file size = 1.4 GB, Job scheduler = Univa Grid Engine (UGE)

[Plot: Times for copying the 1.4 GB file, duration in seconds - GPFS01, Lustre01, Lustre03]

[Plot: Reconstruction duration (low resolution), in seconds - GPFS01, Lustre01, Lustre03]

Case 2 - High resolution

Parameters: Detector = PCO Edge, Cluster = COM07, Resolution = 2560 x 2160 (3651 scan steps), NX size = 848k, Total file size = 39 GB, Job scheduler = Univa Grid Engine (UGE)

[Plot: Times for copying the 39 GB file, duration in seconds - GPFS01, Lustre01, Lustre03]

[Plot: Reconstruction duration (high resolution), in seconds - GPFS01, Lustre01, Lustre03]

4.2. Some conclusions from the above

General:

- The use of the latest HDF5 format and associated NeXus metadata, instead of the original TIFF image stacks, has led to at least a 100% improvement in performance.

Technical:

- An additional benefit of applying the NeXus model is the potential to use single files that include everything necessary for further processing.
- The details of the necessary processing must be considered. For example, I/O to a parallel HDF5 file (pHDF5) is not necessarily very efficient at low levels of parallelism, mainly due to the time and resources needed to set up the processing pipelines. It has been found that only beyond a parallelism of 5 does using the parallel architecture produce performance gains that outweigh the setup time.
- There are strong arguments for using MPI for cluster jobs, as it is found to be both more portable and likely to exist on all clusters.
- The current reconstruction software starts with an HDF5 file, uses parallel reads from many cluster nodes and separate processes, and uses the batch-processing cluster management software Univa. The combination of the separation of files and the properties of the cluster management software results in a stack of files, normally TIFF images. It would be highly desirable to write from these separate processes to a single HDF5 file.
- The use of MPI and pHDF5 with the new pipeline enables this, and consequently greatly simplifies processes such as archiving and management.
- There are technical issues associated with our HPC file systems that can result in inconsistent performance levels. This implies that there must be tight control over configuration management; we have started to use Jenkins for this purpose. Contributory issues observed include:
  o Operating system upgrades, hardware changes and software changes can lead to hard-to-understand degradation of performance.
  o Performance can often depend on various factors such as the current occupation level of the file system.
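The observation that parallelism only pays off beyond about 5 processes is the familiar fixed-setup-cost trade-off. A toy model (the numbers are made up, chosen only to reproduce the reported crossover): with serial runtime w and a fixed pipeline setup cost s, the parallel runtime is T(p) = s + w/p, which beats serial only once p is large enough:

```python
# Illustrative model only: w and s are arbitrary units, not measurements.
w = 100.0          # serial runtime (all the work, no setup)
s = 80.0           # fixed pipeline setup cost paid by the parallel path

def t_parallel(p: int) -> float:
    """Parallel runtime: setup plus the work divided over p processes."""
    return s + w / p

# First degree of parallelism at which the parallel path beats serial.
crossover = min(p for p in range(1, 64) if t_parallel(p) < w)
print(crossover)
```

With these illustrative values the parallel path first wins at p = 6, i.e. "beyond a parallelism of 5"; the larger the setup cost relative to the work, the later the crossover.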

5. Update on Hierarchical Data File Format (HDF5) support

There have been some important developments of the pHDF5 libraries since the previous D8.5 report; one of the most important has been the release of support for Single Writer Multiple Reader (SWMR) access to HDF5 files. These developments were not directly funded by PaN-data ODI but are deemed to be of fundamental importance to subsequent research, by supplying a stable and freely available solution for high-performance computing. The funded developments were all performed by the HDF Group at the request of the facilities and should be integrated into the HDF release tree.

- Mar 2013: Feasibility study (funded by Diamond: $67k, with an estimated cost to complete the currently foreseen developments of $344k)
- Aug 2013: SWMR internal library changes (funded by Diamond: $103k)
- Oct 2013: SWMR API changes (funded by Dectris: $38k)
- Nov 2013: HDF5 SWMR test infrastructure (funded by ESRF: $60k)

The current state of HDF5 development at DLS may be followed on the externally available web site:

6. Software available

- h5python, a version of Python optimized to access HDF5 files and allowing the use of additional tools such as numpy.
- A single writer multiple reader (SWMR) test application.
- cbflib -> NeXus - contains a NeXus data writer.
- A high-performance library for directly reading and writing NeXus files.
- The detailed control of the detector is delegated to plugins within the EPICS Area Detector architecture; the plugins are normally written in C or C++.
- The parallel HDF5 writer is currently tailored to EPICS/Diamond requirements; however, this tailoring is only superficial. The intention is to abstract it out and publish it on our external website. The main issue is to abstract the TCP protocol from the detector system to the pHDF5 writer. Given the plugin code, it should be relatively straightforward to integrate it into the LIMA architecture (Lima.blissgarden.org/applications/tango/doc/index.html).
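The SWMR pattern that this work enables looks roughly as follows in h5py (a sketch, assuming an h5py/HDF5 build with SWMR support; the file name and dataset layout are invented):

```python
import os
import tempfile

import h5py
import numpy as np

path = os.path.join(tempfile.mkdtemp(), "swmr_demo.h5")

# Writer: SWMR requires the 'latest' file format, and a growing dataset
# must be created extendable before SWMR mode is switched on.
with h5py.File(path, "w", libver="latest") as f:
    frames = f.create_dataset("frames", shape=(0, 4), maxshape=(None, 4))
    f.swmr_mode = True                  # from here on, readers may attach
    for i in range(3):
        frames.resize((i + 1, 4))
        frames[i] = np.full(4, float(i))
        frames.flush()                  # publish the new frame to readers

# Reader: may open the file while a writer still holds it open.
with h5py.File(path, "r", libver="latest", swmr=True) as f:
    shape = f["frames"].shape
print(shape)
```

In a live acquisition the reader would poll `refresh()` on the dataset to pick up newly flushed frames while the detector is still writing.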


More information

Lustre architecture for Riccardo Veraldi for the LCLS IT Team

Lustre architecture for Riccardo Veraldi for the LCLS IT Team Lustre architecture for LCLS@SLAC Riccardo Veraldi for the LCLS IT Team 2 LCLS Experimental Floor 3 LCLS Parameters 4 LCLS Physics LCLS has already had a significant impact on many areas of science, including:

More information

The Use of Cloud Computing Resources in an HPC Environment

The Use of Cloud Computing Resources in an HPC Environment The Use of Cloud Computing Resources in an HPC Environment Bill, Labate, UCLA Office of Information Technology Prakashan Korambath, UCLA Institute for Digital Research & Education Cloud computing becomes

More information

University at Buffalo Center for Computational Research

University at Buffalo Center for Computational Research University at Buffalo Center for Computational Research The following is a short and long description of CCR Facilities for use in proposals, reports, and presentations. If desired, a letter of support

More information

Data storage services at KEK/CRC -- status and plan

Data storage services at KEK/CRC -- status and plan Data storage services at KEK/CRC -- status and plan KEK/CRC Hiroyuki Matsunaga Most of the slides are prepared by Koichi Murakami and Go Iwai KEKCC System Overview KEKCC (Central Computing System) The

More information

Processing at ebic/diamond Light Source. Alun Ashton

Processing at ebic/diamond Light Source. Alun Ashton Processing at ebic/diamond Light Source Alun Ashton Computing/Software Support Groups Scientific Computing User office Business IT STFC, CCPs, Universities, Collaborators Analysis Acquisition Controls

More information

Online Data Analysis at European XFEL

Online Data Analysis at European XFEL Online Data Analysis at European XFEL Hans Fangohr Control and Analysis Software Group Senior Data Analysis Scientist DESY, 25 January 2018 2 Outline Introduction & European XFEL status Overview online

More information

4th TERENA NRENs and Grids Workshop, Amsterdam, Dec. 6-7, Marcin Lawenda Poznań Supercomputing and Networking Center

4th TERENA NRENs and Grids Workshop, Amsterdam, Dec. 6-7, Marcin Lawenda Poznań Supercomputing and Networking Center Marcin Lawenda Poznań Supercomputing and Networking Center Why Vlabs? VERY limited access Main reason - COSTS Main GOAL - to make commonly accessible Added Value virtual, remote,,...grid Grid-enabled Virtual

More information

SAXS at the ESRF Beamlines ID01 and ID02

SAXS at the ESRF Beamlines ID01 and ID02 SAXS at the ESRF Beamlines ID01 and ID02 Peter Boesecke European Synchrotron Radiation Facility, Grenoble, France (boesecke@esrf.eu) Contents History Current Situation Online/Offline Treatment (SAXS package/spd

More information

Deliverable D5.3. World-wide E-infrastructure for structural biology. Grant agreement no.: Prototype of the new VRE portal functionality

Deliverable D5.3. World-wide E-infrastructure for structural biology. Grant agreement no.: Prototype of the new VRE portal functionality Deliverable D5.3 Project Title: Project Acronym: World-wide E-infrastructure for structural biology West-Life Grant agreement no.: 675858 Deliverable title: Lead Beneficiary: Prototype of the new VRE portal

More information

Prototype D10.2: Project Web-site

Prototype D10.2: Project Web-site EC Project 257859 Risk and Opportunity management of huge-scale BUSiness community cooperation Prototype D10.2: Project Web-site 29 Dec 2010 Version: 1.0 Thomas Gottron gottron@uni-koblenz.de Institute

More information

Data oriented job submission scheme for the PHENIX user analysis in CCJ

Data oriented job submission scheme for the PHENIX user analysis in CCJ Journal of Physics: Conference Series Data oriented job submission scheme for the PHENIX user analysis in CCJ To cite this article: T Nakamura et al 2011 J. Phys.: Conf. Ser. 331 072025 Related content

More information

The BioHPC Nucleus Cluster & Future Developments

The BioHPC Nucleus Cluster & Future Developments 1 The BioHPC Nucleus Cluster & Future Developments Overview Today we ll talk about the BioHPC Nucleus HPC cluster with some technical details for those interested! How is it designed? What hardware does

More information

Using Web Camera Technology to Monitor Steel Construction

Using Web Camera Technology to Monitor Steel Construction Using Web Camera Technology to Monitor Steel Construction Kerry T. Slattery, Ph.D., P.E. Southern Illinois University Edwardsville Edwardsville, Illinois Many construction companies install electronic

More information

Analyzing the High Performance Parallel I/O on LRZ HPC systems. Sandra Méndez. HPC Group, LRZ. June 23, 2016

Analyzing the High Performance Parallel I/O on LRZ HPC systems. Sandra Méndez. HPC Group, LRZ. June 23, 2016 Analyzing the High Performance Parallel I/O on LRZ HPC systems Sandra Méndez. HPC Group, LRZ. June 23, 2016 Outline SuperMUC supercomputer User Projects Monitoring Tool I/O Software Stack I/O Analysis

More information

The Virtual Observatory and the IVOA

The Virtual Observatory and the IVOA The Virtual Observatory and the IVOA The Virtual Observatory Emergence of the Virtual Observatory concept by 2000 Concerns about the data avalanche, with in mind in particular very large surveys such as

More information

Italy - Information Day: 2012 FP7 Space WP and 5th Call. Peter Breger Space Research and Development Unit

Italy - Information Day: 2012 FP7 Space WP and 5th Call. Peter Breger Space Research and Development Unit Italy - Information Day: 2012 FP7 Space WP and 5th Call Peter Breger Space Research and Development Unit Content Overview General features Activity 9.1 Space based applications and GMES Activity 9.2 Strengthening

More information

Habanero Operating Committee. January

Habanero Operating Committee. January Habanero Operating Committee January 25 2017 Habanero Overview 1. Execute Nodes 2. Head Nodes 3. Storage 4. Network Execute Nodes Type Quantity Standard 176 High Memory 32 GPU* 14 Total 222 Execute Nodes

More information

Investigation on reconstruction methods applied to 3D terahertz computed Tomography

Investigation on reconstruction methods applied to 3D terahertz computed Tomography Investigation on reconstruction methods applied to 3D terahertz computed Tomography B. Recur, 3 A. Younus, 1, P. Mounaix 1, S. Salort, 2 B. Chassagne, 2 P. Desbarats, 3 J-P. Caumes, 2 and E. Abraham 1

More information

SSRS-4 and the CREMLIN follow up project

SSRS-4 and the CREMLIN follow up project SSRS-4 and the CREMLIN follow up project Towards elaborating a plan for the future collaboration Martin Sandhop SSRS-4 and the CREMLIN follow up project www.cremlin.eu CREMLIN WP5 Workshop: "Towards a

More information

Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA

Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA Event-Synchronized Data Acquisition System of 5 Giga-bps Data Rate for User Experiment at the XFEL Facility, SACLA Mitsuhiro YAMAGA JASRI Oct.11, 2011 @ICALEPCS2011 Contents: Introduction Data Acquisition

More information

Experiences with HP SFS / Lustre in HPC Production

Experiences with HP SFS / Lustre in HPC Production Experiences with HP SFS / Lustre in HPC Production Computing Centre (SSCK) University of Karlsruhe Laifer@rz.uni-karlsruhe.de page 1 Outline» What is HP StorageWorks Scalable File Share (HP SFS)? A Lustre

More information

MPI Optimizations via MXM and FCA for Maximum Performance on LS-DYNA

MPI Optimizations via MXM and FCA for Maximum Performance on LS-DYNA MPI Optimizations via MXM and FCA for Maximum Performance on LS-DYNA Gilad Shainer 1, Tong Liu 1, Pak Lui 1, Todd Wilde 1 1 Mellanox Technologies Abstract From concept to engineering, and from design to

More information

high performance medical reconstruction using stream programming paradigms

high performance medical reconstruction using stream programming paradigms high performance medical reconstruction using stream programming paradigms This Paper describes the implementation and results of CT reconstruction using Filtered Back Projection on various stream programming

More information

Overview of the CRISP proposal

Overview of the CRISP proposal Overview of the CRISP proposal Context Work Package Structure IT Work Packages Slide: 1 Origin of the CRISP proposal Call publication: End of July 2010 deadline towards end of 2010 4 topics concerning

More information

RZG Visualisation Infrastructure

RZG Visualisation Infrastructure Visualisation of Large Data Sets on Supercomputers RZG Visualisation Infrastructure Markus Rampp Computing Centre (RZG) of the Max-Planck-Society and IPP markus.rampp@rzg.mpg.de LRZ/RZG Course on Visualisation

More information

Parallel Storage Systems for Large-Scale Machines

Parallel Storage Systems for Large-Scale Machines Parallel Storage Systems for Large-Scale Machines Doctoral Showcase Christos FILIPPIDIS (cfjs@outlook.com) Department of Informatics and Telecommunications, National and Kapodistrian University of Athens

More information

Energy efficient real-time computing for extremely large telescopes with GPU

Energy efficient real-time computing for extremely large telescopes with GPU Energy efficient real-time computing for extremely large telescopes with GPU Florian Ferreira & Damien Gratadour Observatoire de Paris & Université Paris Diderot 1 Project #671662 funded by European Commission

More information

Grid technologies, solutions and concepts in the synchrotron Elettra

Grid technologies, solutions and concepts in the synchrotron Elettra Grid technologies, solutions and concepts in the synchrotron Elettra Roberto Pugliese, George Kourousias, Alessio Curri, Milan Prica, Andrea Del Linz Scientific Computing Group, Elettra Sincrotrone, Trieste,

More information

STATUS OF THE ULTRA FAST TOMOGRAPHY EXPERIMENTS CONTROL AT ANKA (THCA06)

STATUS OF THE ULTRA FAST TOMOGRAPHY EXPERIMENTS CONTROL AT ANKA (THCA06) STATUS OF THE ULTRA FAST TOMOGRAPHY EXPERIMENTS CONTROL AT ANKA (THCA06) D. Haas, W. Mexner, T. Spangenberg, A. Cecilia, P. Vagovic, A. Kopmann, M. Balzer, M. Vogelgesang, H. Pasic, S. Chilingaryan 1 David

More information

STRAIGHT LINE REFERENCE SYSTEM STATUS REPORT ON POISSON SYSTEM CALIBRATION

STRAIGHT LINE REFERENCE SYSTEM STATUS REPORT ON POISSON SYSTEM CALIBRATION STRAIGHT LINE REFERENCE SYSTEM STATUS REPORT ON POISSON SYSTEM CALIBRATION C. Schwalm, DESY, Hamburg, Germany Abstract For the Alignment of the European XFEL, a Straight Line Reference System will be used

More information

-An open source not for profit project -On GitHub DawnScience

-An open source not for profit project -On GitHub DawnScience -An open source not for profit project -On GitHub DawnScience - Diamond Light Source Ltd. and the ESRF are largely publically funded research facilities Collaborations Science Working Group science.eclipse.org

More information

Dell EMC Ready Bundle for HPC Digital Manufacturing Dassault Systѐmes Simulia Abaqus Performance

Dell EMC Ready Bundle for HPC Digital Manufacturing Dassault Systѐmes Simulia Abaqus Performance Dell EMC Ready Bundle for HPC Digital Manufacturing Dassault Systѐmes Simulia Abaqus Performance This Dell EMC technical white paper discusses performance benchmarking results and analysis for Simulia

More information

ICAT Job Portal. a generic job submission system built on a scientific data catalog. IWSG 2013 ETH, Zurich, Switzerland 3-5 June 2013

ICAT Job Portal. a generic job submission system built on a scientific data catalog. IWSG 2013 ETH, Zurich, Switzerland 3-5 June 2013 ICAT Job Portal a generic job submission system built on a scientific data catalog IWSG 2013 ETH, Zurich, Switzerland 3-5 June 2013 Steve Fisher, Kevin Phipps and Dan Rolfe Rutherford Appleton Laboratory

More information

Supporting Data Workflows at STFC. Brian Matthews Scientific Computing Department

Supporting Data Workflows at STFC. Brian Matthews Scientific Computing Department Supporting Data Workflows at STFC Brian Matthews Scientific Computing Department 1 What we do now : Raw Data Management What we want to do : Supporting user workflows What we want to do : sharing and publishing

More information

HPC Architectures. Types of resource currently in use

HPC Architectures. Types of resource currently in use HPC Architectures Types of resource currently in use Reusing this material This work is licensed under a Creative Commons Attribution- NonCommercial-ShareAlike 4.0 International License. http://creativecommons.org/licenses/by-nc-sa/4.0/deed.en_us

More information

MULTIMEDIA TECHNOLOGIES FOR THE USE OF INTERPRETERS AND TRANSLATORS. By Angela Carabelli SSLMIT, Trieste

MULTIMEDIA TECHNOLOGIES FOR THE USE OF INTERPRETERS AND TRANSLATORS. By Angela Carabelli SSLMIT, Trieste MULTIMEDIA TECHNOLOGIES FOR THE USE OF INTERPRETERS AND TRANSLATORS By SSLMIT, Trieste The availability of teaching materials for training interpreters and translators has always been an issue of unquestionable

More information

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 11th CALL (T ier-0)

TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 11th CALL (T ier-0) TECHNICAL GUIDELINES FOR APPLICANTS TO PRACE 11th CALL (T ier-0) Contributing sites and the corresponding computer systems for this call are: BSC, Spain IBM System X idataplex CINECA, Italy The site selection

More information

Lustre2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE

Lustre2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE Lustre2.5 Performance Evaluation: Performance Improvements with Large I/O Patches, Metadata Improvements, and Metadata Scaling with DNE Hitoshi Sato *1, Shuichi Ihara *2, Satoshi Matsuoka *1 *1 Tokyo Institute

More information

Leonhard: a new cluster for Big Data at ETH

Leonhard: a new cluster for Big Data at ETH Leonhard: a new cluster for Big Data at ETH Bernd Rinn, Head of Scientific IT Services Olivier Byrde, Group leader High Performance Computing Bernd Rinn & Olivier Byrde 2017-02-15 1 Agenda Welcome address

More information

Lauetools. A software package for Laue microdiffraction data analysis. https://sourceforge.net/projects/lauetools /

Lauetools. A software package for Laue microdiffraction data analysis. https://sourceforge.net/projects/lauetools / Lauetools A software package for Laue microdiffraction data analysis https://sourceforge.net/projects/lauetools / Motivations Motivations ImageJ LAUE raw data XMAS fit2d Some codes Motivations LAUE raw

More information

Hardware Aspects, Modularity and Integration of an Event Mode Data Acquisition and Instrument Control for the European Spallation Source (ESS)

Hardware Aspects, Modularity and Integration of an Event Mode Data Acquisition and Instrument Control for the European Spallation Source (ESS) Hardware Aspects, Modularity and Integration of an Event Mode Data Acquisition and Instrument Control for the European Spallation Source (ESS) T Gahl 1,5, M Hagen 1, R Hall-Wilton 1,2, S Kolya 1, M Koennecke

More information

Procedures and Resources Plan

Procedures and Resources Plan Project acronym D4Science Project full title DIstributed collaboratories Infrastructure on Grid Enabled Technology 4 Science Project No 212488 Procedures and Resources Plan Deliverable No DSA1.1b January

More information

JULEA: A Flexible Storage Framework for HPC

JULEA: A Flexible Storage Framework for HPC JULEA: A Flexible Storage Framework for HPC Workshop on Performance and Scalability of Storage Systems Michael Kuhn Research Group Scientific Computing Department of Informatics Universität Hamburg 2017-06-22

More information

Splotch: High Performance Visualization using MPI, OpenMP and CUDA

Splotch: High Performance Visualization using MPI, OpenMP and CUDA Splotch: High Performance Visualization using MPI, OpenMP and CUDA Klaus Dolag (Munich University Observatory) Martin Reinecke (MPA, Garching) Claudio Gheller (CSCS, Switzerland), Marzia Rivi (CINECA,

More information

A Breakthrough in Non-Volatile Memory Technology FUJITSU LIMITED

A Breakthrough in Non-Volatile Memory Technology FUJITSU LIMITED A Breakthrough in Non-Volatile Memory Technology & 0 2018 FUJITSU LIMITED IT needs to accelerate time-to-market Situation: End users and applications need instant access to data to progress faster and

More information

e-infrastructures in FP7 INFO DAY - Paris

e-infrastructures in FP7 INFO DAY - Paris e-infrastructures in FP7 INFO DAY - Paris Carlos Morais Pires European Commission DG INFSO GÉANT & e-infrastructure Unit 1 Global challenges with high societal impact Big Science and the role of empowered

More information

DIGITAL STEWARDSHIP SUPPLEMENTARY INFORMATION FORM

DIGITAL STEWARDSHIP SUPPLEMENTARY INFORMATION FORM OMB No. 3137 0071, Exp. Date: 09/30/2015 DIGITAL STEWARDSHIP SUPPLEMENTARY INFORMATION FORM Introduction: IMLS is committed to expanding public access to IMLS-funded research, data and other digital products:

More information

Data Challenges in Photon Science. Manuela Kuhn GridKa School 2016 Karlsruhe, 29th August 2016

Data Challenges in Photon Science. Manuela Kuhn GridKa School 2016 Karlsruhe, 29th August 2016 Data Challenges in Photon Science Manuela Kuhn GridKa School 2016 Karlsruhe, 29th August 2016 Photon Science > Exploration of tiny samples of nanomaterials > Synchrotrons and free electron lasers generate

More information

Digital Image Processing

Digital Image Processing Digital Image Processing SPECIAL TOPICS CT IMAGES Hamid R. Rabiee Fall 2015 What is an image? 2 Are images only about visual concepts? We ve already seen that there are other kinds of image. In this lecture

More information

DATA-SHARING PLAN FOR MOORE FOUNDATION Coral resilience investigated in the field and via a sea anemone model system

DATA-SHARING PLAN FOR MOORE FOUNDATION Coral resilience investigated in the field and via a sea anemone model system DATA-SHARING PLAN FOR MOORE FOUNDATION Coral resilience investigated in the field and via a sea anemone model system GENERAL PHILOSOPHY (Arthur Grossman, Steve Palumbi, and John Pringle) The three Principal

More information

HPC Capabilities at Research Intensive Universities

HPC Capabilities at Research Intensive Universities HPC Capabilities at Research Intensive Universities Purushotham (Puri) V. Bangalore Department of Computer and Information Sciences and UAB IT Research Computing UAB HPC Resources 24 nodes (192 cores)

More information

Data Analytics and Storage System (DASS) Mixing POSIX and Hadoop Architectures. 13 November 2016

Data Analytics and Storage System (DASS) Mixing POSIX and Hadoop Architectures. 13 November 2016 National Aeronautics and Space Administration Data Analytics and Storage System (DASS) Mixing POSIX and Hadoop Architectures 13 November 2016 Carrie Spear (carrie.e.spear@nasa.gov) HPC Architect/Contractor

More information

EUDAT. Towards a pan-european Collaborative Data Infrastructure. Damien Lecarpentier CSC-IT Center for Science, Finland EUDAT User Forum, Barcelona

EUDAT. Towards a pan-european Collaborative Data Infrastructure. Damien Lecarpentier CSC-IT Center for Science, Finland EUDAT User Forum, Barcelona EUDAT Towards a pan-european Collaborative Data Infrastructure Damien Lecarpentier CSC-IT Center for Science, Finland EUDAT User Forum, Barcelona Date: 7 March 2012 EUDAT Key facts Content Project Name

More information

ADVANCING REALITY MODELING WITH CONTEXTCAPTURE

ADVANCING REALITY MODELING WITH CONTEXTCAPTURE ADVANCING REALITY MODELING WITH CONTEXTCAPTURE Knowing the existing conditions of a project is a key asset in any decision process. Governments need to better know their territories, through mapping operations,

More information

CLOUDS OF JINR, UNIVERSITY OF SOFIA AND INRNE JOIN TOGETHER

CLOUDS OF JINR, UNIVERSITY OF SOFIA AND INRNE JOIN TOGETHER CLOUDS OF JINR, UNIVERSITY OF SOFIA AND INRNE JOIN TOGETHER V.V. Korenkov 1, N.A. Kutovskiy 1, N.A. Balashov 1, V.T. Dimitrov 2,a, R.D. Hristova 2, K.T. Kouzmov 2, S.T. Hristov 3 1 Laboratory of Information

More information

Cover Page. The handle holds various files of this Leiden University dissertation.

Cover Page. The handle  holds various files of this Leiden University dissertation. Cover Page The handle http://hdl.handle.net/1887/39638 holds various files of this Leiden University dissertation. Author: Pelt D.M. Title: Filter-based reconstruction methods for tomography Issue Date:

More information

Advanced Research Compu2ng Informa2on Technology Virginia Tech

Advanced Research Compu2ng Informa2on Technology Virginia Tech Advanced Research Compu2ng Informa2on Technology Virginia Tech www.arc.vt.edu Personnel Associate VP for Research Compu6ng: Terry Herdman (herd88@vt.edu) Director, HPC: Vijay Agarwala (vijaykag@vt.edu)

More information

FuncX: A Function Serving Platform for HPC. Ryan Chard 28 Jan 2019

FuncX: A Function Serving Platform for HPC. Ryan Chard 28 Jan 2019 FuncX: A Function Serving Platform for HPC Ryan Chard 28 Jan 2019 Outline - Motivation FuncX: FaaS for HPC Implementation status Preliminary applications - Machine learning inference Automating analysis

More information

XRADIA microxct Manual

XRADIA microxct Manual XRADIA microxct Manual Multiscale CT Lab Table of Contents 1. Introduction and Basics 1.1 Instrument Parts 1.2 Powering up the system 1.3 Preparing your sample 2. TXM Controller 2.1 Starting up 2.2 Finding

More information

ESFRI Strategic Roadmap & RI Long-term sustainability an EC overview

ESFRI Strategic Roadmap & RI Long-term sustainability an EC overview ESFRI Strategic Roadmap & RI Long-term sustainability an EC overview Margarida Ribeiro European Commission DG Research & B.4 - Research Infrastructure Research and What is ESFRI? An informal body composed

More information

Architectures for Scalable Media Object Search

Architectures for Scalable Media Object Search Architectures for Scalable Media Object Search Dennis Sng Deputy Director & Principal Scientist NVIDIA GPU Technology Workshop 10 July 2014 ROSE LAB OVERVIEW 2 Large Database of Media Objects Next- Generation

More information