NNSA Advanced Simulation and Computing An Overview of Data Management Issues


1 NNSA Advanced Simulation and Computing: An Overview of Data Management Issues
Steve Louis, Lawrence Livermore National Lab
LLNL ASC VIEWS Program Lead
TEL: FAX:
Presented at the DMW 2004 Workshop, March 16-18, 2004, Stanford Linear Accelerator Center
UCRL-PRES
This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

2 DISCLAIMER This document was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor the University of California nor any of their employees, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial products, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or the University of California. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or the University of California, and shall not be used for advertising or product endorsement purposes. This work was performed under the auspices of the U.S. Department of Energy by University of California Lawrence Livermore National Laboratory under contract No. W-7405-Eng-48.

3 Typical ASC application characterization
- ASC codes at LLNL are complex multi-physics codes.
- The codes integrate initial-value partial differential equations for the conservation of particles, momentum, and energy for important elements and constituents of the devices.
- Typical calculations use 10,000 to 1,000,000,000 mesh cells, depending on the problem and the desired resolution.
- The larger problems must be domain decomposed to fit in the available memory of distributed-memory systems.
- Partial differential equations are solved with a combination of explicit, implicit, and Monte Carlo techniques; linear and non-linear solvers play an important role.
- The problems are integrated in time from an initial state to a final state, as the sketch after this slide illustrates.
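The integration pattern this slide describes, a domain-decomposed mesh advanced explicitly in time from an initial state to a final state, can be shown with a minimal sketch. This is an illustrative 1-D diffusion stand-in in plain Python/NumPy, not an ASC code; the subdomain count, cell counts, and coefficients are all hypothetical.

```python
# Minimal sketch: an initial-value PDE integrated explicitly in time over a
# domain-decomposed mesh. The halo exchange emulates what MPI would do
# between distributed-memory nodes in a real run.
import numpy as np

NDOMAINS = 4          # subdomains (one per "node" in a real run)
CELLS = 64            # cells per subdomain
DT, DX, KAPPA = 0.1, 1.0, 0.25

# Each subdomain carries one ghost cell on each side.
domains = [np.zeros(CELLS + 2) for _ in range(NDOMAINS)]
domains[0][1:-1] = 1.0   # initial state: hot first subdomain

def exchange_halos(doms):
    """Copy boundary cells into neighbors' ghost cells (stand-in for MPI)."""
    for i in range(len(doms) - 1):
        doms[i][-1] = doms[i + 1][1]   # right ghost <- neighbor's first cell
        doms[i + 1][0] = doms[i][-2]   # left ghost  <- neighbor's last cell

for step in range(1000):              # integrate from initial to final state
    exchange_halos(domains)
    for u in domains:
        # explicit (FTCS) update of the interior cells of each subdomain
        u[1:-1] += KAPPA * DT / DX**2 * (u[2:] - 2.0 * u[1:-1] + u[:-2])

print("total field remaining (losses only at outer boundaries):",
      sum(u[1:-1].sum() for u in domains))
```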

4 Typical ASC application characterization
Typical phases:
- Interactive problem set-up for a large simulation
- Running one or more very large 2-D or 3-D calculations
- Visualization, comparison, validation, and archiving of results
Typically the large calculations need terascale computing; ASC uses terascale computing and 1000s of CPUs now.
As these codes run over many thousands of processors, huge data dumps must be made frequently to allow for restarts (a.k.a. defensive I/O). Visualization and/or physics files are also saved regularly for subsequent analyses (a.k.a. productive I/O); the sketch below shows the pattern.
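A minimal sketch of the two I/O streams this slide names, assuming nothing about LLNL's actual dump intervals or file layout: defensive (restart) dumps are taken frequently and overwritten in rotation, while productive (visualization/physics) dumps are taken less often and all retained.

```python
# Minimal sketch of defensive vs. productive I/O in a timestep loop.
# Intervals, sizes, and file names are illustrative only.
import numpy as np

RESTART_EVERY = 50      # defensive I/O: frequent, recycled after success
VIZ_EVERY = 200         # productive I/O: less frequent, kept for analysis

state = np.random.rand(1_000_000)       # stand-in for a decomposed mesh

def write_restart(step, data):
    # ping-pong between two restart slots so a crash mid-write is survivable
    np.save(f"restart_{step % 2}.npy", data)

def write_viz(step, data):
    # every productive dump is retained for later visualization/analysis
    np.save(f"viz_{step:06d}.npy", data)

for step in range(1, 1001):
    state *= 0.999                       # stand-in for one physics timestep
    if step % RESTART_EVERY == 0:
        write_restart(step, state)       # defensive I/O
    if step % VIZ_EVERY == 0:
        write_viz(step, state)           # productive I/O
```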

5 From current LLNL Compute & I/O Model
- Derived from peak platform and network bandwidth, historical usage patterns, user input, and projections
- A week-long run can generate up to 30 TB of data, and moving that data to archive in one-tenth of the time it took to generate now requires an I/O throughput rate of 495 MB/s (the arithmetic is checked in the sketch below)
- However, only half the data might typically need to be stored, so the planned throughput rate to archive is less, ~250 MB/s
- Some users tend to keep data on the platform for post-processing and visualization purposes, but some users may move the entire dataset to a separate visualization server
- A Site-Wide Global File System would significantly shift this model and would reduce the need to explicitly move data
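The 495 MB/s figure follows from simple arithmetic. A quick back-of-envelope check, assuming decimal terabytes and a 7-day generation window:

```python
# Check of slide 5's numbers: 30 TB generated over a one-week run,
# drained to archive in one-tenth of the time it took to generate.
TB = 1e12                      # decimal terabyte
week_s = 7 * 24 * 3600         # one week in seconds

data = 30 * TB
archive_time = week_s / 10     # move it 10x faster than it was made
rate = data / archive_time / 1e6
print(f"full dataset: {rate:.0f} MB/s")       # ~496 MB/s, matching 495 MB/s
print(f"half dataset: {rate / 2:.0f} MB/s")   # ~248 MB/s, the planned ~250 MB/s
```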

6 Some relevant ancient history on data issues
Joint DOE/NSF 1998 Workshop Series on Data and Visualization Corridors (DVCs):
- Oxnard, CA, January (Frameworks)
- Santa Fe, NM, March 4-6 (User Requirements)
- Bodega Bay, CA, April 6-8 (Technology Trends)
- Duck, NC, May (Writing Retreat I)
- Wye River, MD, July 5-8 (Writing Retreat II)
Report published September 1998, Technical Report CACR-164.
"There is a pressing need for new methods of handling truly massive datasets, of exploring and visualizing them, and of communicating them over geographic distances" (from the Foreword of the Report on the 1998 DVC Workshop Series)

7 Some 1998 workshop recommendations (most, if not all, are still relevant today)
- Establish a vigorous, interdisciplinary program to improve the ability to see and understand output from large data sources
- Conduct new research and development focused on data management, graphics, and scientific visualization for large-scale data
- Increase the federal effort and annual investment in DVC R&D by $ M per year over current levels
- Develop and support a national strategy to incorporate results of DVC R&D into national laboratories, research centers, and national infrastructure programs

8 DVC system model, without archiving (a la John van Rosendale, circa 1998)
[Diagram: a simulation engine feeds a data manipulation engine, which feeds rendering engines; the components are connected by SAN, LAN, and WAN links, ending at the viewer ("wow!")]

9 DVC model is still relevant for cluster-based hardware and software now deployed at LLNL
- New levels of graphics performance based on COTS technologies (with Lintel, Quadrics, NVIDIA cards)
- Tight coupling to the compute platform via Lustre and GigE links
- Distributed parallel software stack (open source Chromium, DMX)
- Parallel, scalable end-user applications (e.g., VisIt, Blockbuster)
- Multiple display capabilities (Power Walls, office high-resolution displays)
- Provides the blueprint for future Purple-related visualization and data deployment
[Diagram: MCR (1,116 P4 compute nodes, 1,152-port QsNet Elan3) and PVC (58 P4 render plus 6 display nodes, 128-port QsNet Elan3) share a common 90 TB Lustre file system over a GbEnet federated switch, driving digital and analog desktop displays and a 3x2 PowerWall]

10 LLNL Open Computing Facility (OCF) Clusters, Networks, Storage
[Diagram (BB/MKS version 6, Dec 23, 2003): the OCF clusters tied together by a federated Ethernet backbone (B113) with the LLNL external backbone, the HPSS archive (with 24 PFTP movers), and the OCF SGS File System Cluster (OFC) of OST heads and FC RAID (146, 73, 36 GB drives):
- Thunder (B451): 1,004 quad-Itanium2 compute nodes on a 1,024-port QsNet Elan4; 4 login and 16 gateway nodes; 350 MB/s delivered Lustre I/O over 4x1GbE
- MCR (B439): 1,114 dual-P4 compute nodes on a 1,152-port QsNet Elan3; 4 login and 32 gateway nodes; 190 MB/s delivered Lustre I/O over 2x1GbE
- ALC (B439): 924 dual-P4 compute nodes on a QsNet Elan3; 2 login and 32 gateway nodes; 190 MB/s delivered Lustre I/O over 2x1GbE
- BG/L: 65,536 dual-PowerPC 440 compute nodes on torus and global tree/barrier networks, with 1,024 PPC440 I/O nodes
- PVC (B451): 52 dual-P4 render nodes on a QsNet Elan3, 2 login nodes, 6 dual-P4 display nodes with a dual-P4 head]

11 ASC Data Storage and I/O Roadmap (CY02 through CY07)
- ASC performance targets: 30 TF, 1 PB archive, 7-20 GB/s parallel FS, 1 GB/s to archive tape → 100 TF, 7 PB archive, 100 GB/s parallel FS, 10 GB/s to archive tape → 200 TF, 25 PB archive, 200 GB/s parallel FS, 20 GB/s to archive tape
- SGSFS: Lustre Lite on Linux → Lustre Lite limited production → Lustre with OST striping → Lustre early production → Lustre stable production
- SIO libraries: limited application use → use by key applications → broad application use → performance tuned for Lustre
- Archive: HPSS 4.1 production → HPSS 4.5, HPSS 5.1 production, metadata fixes → HPSS 6.1, replace DCE → TBD
- DFS: COTS DFS in production → pilot NFSv4 on Linux → deploy NFSv4 → integrate NFSv4 with Lustre
- Media technology: 180 GB/disk, 30 MB/s single disk, 300 GB tape capacity, 70 MB/s max tape rate → 600 GB/disk, 80 MB/s single disk, 600 GB tape capacity, 120 MB/s tape rate → 1,200 GB/disk, 200 MB/s single disk, 2 TB tape capacity, 200 MB/s tape rate

12 LLNL HPSS storage slide from three years ago
Accomplishments:
- A 20x performance increase in 15 months (faster nets and disks)
- PSE Milepost demonstrated 170 MB/s aggregate throughput, White-to-HPSS
- Large single-file transfer rates of up to 80 MB/s, White-to-HPSS
- Large single-file transfer rates of up to 150 MB/s, White-to-SGI
Challenges:
- Yearly doubling of throughput is needed for the next machine
[Chart: aggregate throughput to storage, FY96-FY01, rising from 1 MB/s to 4, 6, 9, 120, and 170 MB/s as the system moved to HPSS, then to SP nodes, then to jumbo GE and parallel striping, then to faster disk on faster nodes with multi-node concurrency]
At 170 MB/s, 2 TB of data moves to storage in less than 4 hours. A year and a half ago it took two and a half days to move the same amount of data.

13 Continued improvement in throughput needed to meet requirements of new ASC platforms
[Chart: aggregate throughput to storage, FY96-FY03, rising from 4, 6, and 9 MB/s through 120 and 170 MB/s to 854 MB/s and, as of 12/03, 1,037 MB/s, as the system moved to HPSS, then to SP nodes, then to jumbo GE, parallel striping, and faster disks and nodes using multiple pftp sessions, then to faster disk using multiple Htar sessions on multiple nodes]
Note that this graph represents a 115x performance improvement in four years (9 MB/s in FY99 to 1,037 MB/s in FY03)!

14 A Tri-lab historical timeline for motivating improvement in scalable parallel file systems
- Proposed PathForward activity for SGSFS; propose initial architecture
- PathForward proposal with OBSD vendor; Panasas born
- RFQ and analysis; recommend funding open-source OBSD development and NFSv4 efforts
- Begin partnering talks and negotiations for OBSD and NFSv4 PathForwards
- Build initial requirements document
- SGSFS workshop: "You're Crazy"
- Tri-Lab joint requirements document complete
- PathForward team formed to pursue an RFI/RFQ approach; RFI issued; recommend RFQ process
- Alliance contracts placed with universities on OBSD, overlapped I/O, and NFSv4
- Lustre PathForward effort is born
- U Minn Object Archive begins
- Another workshop: Re-invent POSIX I/O? "Are We Still Crazy?"

15 From the June 2003 HECRTF workshop report
For info: NNSA Tri-lab staff (Lee Ward of SNL, Tyce McLarty of LLNL, Gary Grider of LANL) were the ASC I/O representatives at this workshop. The overwhelming consensus was that POSIX I/O is inadequate.
From Section 5.5, Data Management and File Systems:
- We believe legacy POSIX I/O interfaces are incompatible with the full range of hardware architecture choices contemplated
- The interface does not fully support parallelism along the I/O path
- An alternative, appropriate operating system API should be developed for high-end computing systems
The sketch below contrasts the two styles of I/O.
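To make the complaint concrete, here is a minimal sketch contrasting POSIX-style file-per-process output with a coordinated parallel write through MPI-IO (via mpi4py). MPI-IO is used only as one example of a parallel-aware interface, not the specific API the report proposed; file names and sizes are illustrative.

```python
# Run with e.g.: mpirun -n 4 python io_sketch.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
chunk = np.full(1024, rank, dtype=np.float64)   # this rank's piece of the mesh

# POSIX-style: N independent files, no coordination along the I/O path.
chunk.tofile(f"dump_rank{rank:04d}.bin")

# Parallel-style: one shared file, collective write at per-rank offsets,
# letting the I/O layer coordinate and optimize across all ranks.
fh = MPI.File.Open(comm, "dump_shared.bin",
                   MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * chunk.nbytes, chunk)     # collective, coordinated I/O
fh.Close()
```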

16 LLNL ASC SDM project organization areas emphasize many data management issues
- Metadata Infrastructure and Applications: development effort creating a metadata-based environment for managing and simplifying data access (Metadata Tools Project)
- Data Access and Preparation: research project helping scientists explore terabytes of scientific simulation data by permitting ad-hoc queries over the data (Ad Hoc Query Project)
- Data Discovery: research projects looking at various aspects of feature extraction, data mining, and pattern recognition (Sapphire Project)
- Data Models and Formats: development effort generating models and file formats to ensure that ASC's scientific data can be freely exchanged (Limit Point Systems contract)

17 Scalable Visualization Tool Development for Interactive Exploitation of Large Data Sets
VIEWS-developed tools (e.g., VisIt, TeraScale Browser) provide a vehicle for advanced research capabilities:
- Improved large-surface handling
- Parallel distributed volume rendering
- Topological data representations
- View-dependent surface rendering
- Programmable HW graphics rendering
[Figure: isosurface area plotted as a function of time (from t0) and isovalue, shown with a min-max colormap; the quantity is sketched in code below]
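As one concrete example of the derived quantity behind the slide's figure, this minimal sketch sweeps isovalues over one timestep of a synthetic scalar field and reports isosurface area. It assumes scikit-image is available; the field is a stand-in, not simulation data.

```python
# Isosurface area as a function of isovalue for one timestep.
import numpy as np
from skimage import measure

# Synthetic 64^3 scalar field standing in for one timestep of output.
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
field = np.exp(-(x**2 + y**2 + z**2) * 4)

for isovalue in np.linspace(0.1, 0.9, 5):
    # Extract the isosurface mesh, then measure its total area.
    verts, faces, _, _ = measure.marching_cubes(field, level=isovalue)
    area = measure.mesh_surface_area(verts, faces)
    print(f"isovalue {isovalue:.1f}: surface area {area:10.1f} (grid units)")
```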

18 A take-home message: this five-year-old slide on issues is still as relevant as ever
- Traditional systems for archives and data management are not necessarily suitable for the organization of ASC simulation data
- Traditional systems for realistic rendering and visualization are not necessarily suitable for exploration of ASC simulation data
ASC needs scalable, flexible methods for:
- navigation / archiving of massive data sets
- efficient data subset selection / retrieval
- time-step multivariate animation capability
- interactive computational monitoring / steering
- advanced application development and debugging
- distance and distributed access to massive data

19 The Long-Term Challenge for ASC (yet another five-year-old, but relevant, slide)
Simple linear scaling of existing data management components won't necessarily work. A new use paradigm will be required:
- Introduce users to new, innovative tools
- Motivate and enable vigorous research efforts
- Explore high-risk, high-reward technologies
- Identify technology shortfalls and barriers
