Realtime Data Analytics at NERSC
1 Realtime Data Analytics at NERSC Prabhat XLDB May 24
2 Lawrence Berkeley National Laboratory
3 National Energy Research Scientific Computing Center
4 NERSC is the Production HPC & Data Facility for the DOE Office of Science, the largest funder of physical science research in the U.S. Program areas served: Biological and Environmental Systems; Applied Math, Exascale; Materials, Chemistry, Geophysics; Particle Physics, Astrophysics; Nuclear Physics; Fusion Energy, Plasma Physics
5 Focus on Science: NERSC supports the broad mission needs of the six DOE Office of Science program offices, with 6,000 users and 750 projects, extensive science engagement and user training programs, and 2,078 refereed publications
6 NERSC Systems
- Edison (Cray XC30): 5,576 nodes; 133K 2.4 GHz Intel Ivy Bridge cores; 357 TB RAM; local scratch at 163 GB/s; 16x FDR IB
- Cori (Cray XC-40): Phase 1: 1,630 nodes, 2.3 GHz Intel Haswell cores, 203 TB RAM; Phase 2: >9,300 nodes with >60 cores, 16 GB HBM, and 96 GB DDR per node; 28 PB local scratch at >700 GB/s; 1.5 PB DataWarp at >1.5 TB/s; 32x FDR IB
- Global storage: Global Scratch, 3.6 PB at 80 GB/s (5 x SFA12KE); /project, 5 PB at 50 GB/s (DDN9900 & NexSAN); /home, 250 TB at 5 GB/s (NetApp); HPSS archive at 12 GB/s, 240 PB capacity
- Data-intensive systems: PDSF, JGI, KBASE, HEP (14x QDR); Vis & Analytics; Data Transfer Nodes; Adv. Arch. Testbeds; Science Gateways
- Facility: Ethernet & IB fabric; science-friendly security; production monitoring; power efficiency; Software Defined Networking; WAN at 10 Gb plus 1 x 100 Gb
7 The Cori System: Cori will transition HPC and data-centric workloads to energy-efficient architectures. The system is named after Gerty Cori, biochemist and the first American woman to receive a Nobel Prize in science.
8 DOE facilities are facing a data deluge: Astronomy, Genomics, Climate, Physics, Light Sources
9-17 [Image-only slides; no text transcribed]
18 4 Vs of Scientific Big Data

| Science Domain      | Variety                                  | Volume                | Velocity                    | Veracity                                                    |
|---------------------|------------------------------------------|-----------------------|-----------------------------|-------------------------------------------------------------|
| Astronomy           | Multiple telescopes, multi-band/spectra  | O(100) TB             | 100 GB/night to 10 TB/night | Noisy, acquisition artefacts                                |
| Light Sources       | Multiple imaging modalities              | O(100) GB             | 1 Gb/s to 1 Tb/s            | Noisy, sample preparation/acquisition artefacts             |
| Genomics            | Sequencers, mass-spec, proteomics        | O(1-10) TB            | TB/week                     | Missing data, errors                                        |
| High Energy Physics | Multiple detectors                       | O(100) TB to O(10) PB | 1-10 PB/s reduced to GB/s   | Noisy, artefacts, spatio-temporal                           |
| Climate Simulations | Multi-variate, spatio-temporal           | O(10) TB              | 100 GB/s                    | Clean, but must account for multiple sources of uncertainty |
19 Why Real-time Analytics? Why Now?
- Large instruments are producing massive data streams, and fast, predictable turnaround is integral to the processing pipeline
- Traditional HPC systems use batch queues with long or unpredictable wait times
- Computational steering <-> experimental steering: change the experimental configuration during your precious beam time!
- Follow-on analysis might be time-critical: supernova candidates, asteroid detection
20 Real-time Use Cases
- Real-time interaction with experimental facilities. Light sources: ALS, LCLS
- Real-time jobs driven by web portals: OpenMSI, MetAtlas
- Computational steering: DIII-D reactor
- Experimental steering: iPTF follow-up
21 Real-time Queue at NERSC
NERSC has made a small pool of nodes available for immediate-turnaround, real-time computing:
- Up to 32 nodes (1,024 cores) in the realtime queue
- Realtime nodes have higher priority than any other queue
- The pool can shrink or grow as needed based on demand
- Approved projects have a small number of nodes available on demand, without queue wait times
- Limits are configured on a per-repo basis: maximum number of jobs, maximum number of cores, and wallclock time
A sketch of what a submission to such a queue can look like follows.
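The slides do not show a submission example; the following is a minimal sketch, assuming a Slurm-style scheduler with a "realtime" QOS (NERSC's systems moved to Slurm around this time). The node count, walltime, and the analyze_shot.sh step are illustrative, not from the talk.

```python
import subprocess
import textwrap

# Minimal sketch: submit a short job to a realtime pool via Slurm.
# The QOS name, node count, and walltime below are illustrative; actual
# per-repo limits (max jobs, max cores, wallclock) are set by NERSC.
BATCH_SCRIPT = textwrap.dedent("""\
    #!/bin/bash
    #SBATCH --qos=realtime        # higher-priority realtime QOS (approved projects only)
    #SBATCH --nodes=2             # must stay within the repo's configured node limit
    #SBATCH --time=00:10:00       # short, predictable turnaround is the point
    srun ./analyze_shot.sh        # hypothetical analysis step
""")

def submit_realtime_job(script: str) -> str:
    """Pipe a batch script to sbatch and return the submitted job ID."""
    result = subprocess.run(
        ["sbatch", "--parsable"], input=script,
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()  # --parsable prints just the job ID

if __name__ == "__main__":
    print("Submitted job", submit_realtime_job(BATCH_SCRIPT))
```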
22 Usage (12/ /2016)
23 Distribution. Totals: 332,625 hours used; 23,244 jobs
24 Science Use Case: iPTF
Nightly images are transferred, subtractions performed, and candidates inserted into a database, with a typical turnaround time of < 5 minutes. Discoveries include Yi Cao, et al. (2015), Nature, "A strong ultraviolet pulse from a newborn Type Ia supernova". PIs: Kasliwal, Nugent, Cao. A sketch of the nightly loop appears below.
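The pipeline internals are not shown in the talk; this is a minimal sketch of the transfer-subtract-insert loop it describes. The subtraction, detection threshold, and SQLite schema are illustrative stand-ins (the real iPTF pipeline uses far more sophisticated image differencing).

```python
import sqlite3
import numpy as np

# Hypothetical stand-ins for the pipeline stages named on the slide:
# nightly images arrive, are subtracted against references, and candidate
# detections are inserted into a database for follow-up vetting.

def subtract(new_img: np.ndarray, ref_img: np.ndarray) -> np.ndarray:
    """Toy image differencing; the real pipeline PSF-matches before subtracting."""
    return new_img - ref_img

def find_candidates(diff: np.ndarray, nsigma: float = 5.0):
    """Flag pixels that deviate strongly from the difference-image noise."""
    threshold = nsigma * diff.std()
    ys, xs = np.where(diff > threshold)
    return [(int(x), int(y), float(diff[y, x])) for x, y in zip(xs, ys)]

db = sqlite3.connect("candidates.db")  # illustrative schema, not iPTF's
db.execute("CREATE TABLE IF NOT EXISTS candidates (x INT, y INT, flux REAL)")

rng = np.random.default_rng(0)
ref = rng.normal(100.0, 1.0, (512, 512))      # reference exposure (synthetic)
new = ref + rng.normal(0.0, 1.0, ref.shape)   # tonight's exposure (synthetic)
new[200, 300] += 50.0                         # inject a fake transient

cands = find_candidates(subtract(new, ref))
db.executemany("INSERT INTO candidates VALUES (?, ?, ?)", cands)
db.commit()
print(f"inserted {len(cands)} candidate(s)")
```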
25 Science Use Case: Advanced Light Source
Image reconstruction algorithms run on Cori; the 3D volume is rendered on the SPOT web portal, so ALS beamline users receive instant feedback. In production at ALS beamlines: 24x7 operation; 176,293 datasets; 155 beamline users; 1,050 TB data stored; 2,379,754 jobs at NERSC. A sketch of the reconstruction step follows.
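The talk names the reconstruction step but not the code; the following is a minimal sketch of a filtered-back-projection slice reconstruction using scikit-image's iradon, as a stand-in for the production ALS codes (which the slide does not specify).

```python
import numpy as np
from skimage.transform import radon, iradon

# Stand-in for one tomographic slice: in production, sinograms arrive from
# the ALS beamline; here we synthesize one from a toy phantom.
phantom = np.zeros((256, 256))
phantom[96:160, 96:160] = 1.0                 # a square "sample"

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)       # forward projection (synthetic data)

# Filtered back-projection: the kind of per-slice reconstruction that runs
# on Cori before the 3D volume is rendered on the SPOT portal.
recon = iradon(sinogram, theta=angles, filter_name="ramp")

err = np.abs(recon - phantom).mean()
print(f"mean absolute reconstruction error: {err:.4f}")
```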
26 Science Use Case: Metabolite Atlas
Fragmentation trees are pre-computed for 10,000+ compounds; the real-time queue is used to compare raw spectra against these trees to obtain possible matches, with results in minutes, via an IPython interface to NERSC. Ben Bowen, LBL. A sketch of the matching step follows.
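Neither the matching algorithm nor the data layout is given in the talk; this sketch scores a raw spectrum against precomputed fragment m/z lists with a simple tolerance-based peak count, purely to illustrate the shape of the computation. Compound names and values are invented.

```python
import numpy as np

# Hypothetical precomputed "fragmentation trees", reduced here to fragment
# m/z lists per compound; the real Metabolite Atlas stores much richer trees.
FRAGMENTS = {
    "compound_A": np.array([89.02, 133.01, 179.05]),
    "compound_B": np.array([101.02, 145.05, 191.02]),
}

def match_score(spectrum_mz: np.ndarray, fragments: np.ndarray,
                tol: float = 0.01) -> float:
    """Fraction of predicted fragments found in the raw spectrum within tol."""
    hits = sum(np.any(np.abs(spectrum_mz - f) <= tol) for f in fragments)
    return hits / len(fragments)

def possible_matches(spectrum_mz: np.ndarray, min_score: float = 0.5):
    """Rank compounds whose predicted fragments appear in the spectrum."""
    scores = {name: match_score(spectrum_mz, frags)
              for name, frags in FRAGMENTS.items()}
    return sorted(((s, n) for n, s in scores.items() if s >= min_score),
                  reverse=True)

raw = np.array([89.021, 133.012, 150.0, 179.049])   # synthetic raw spectrum
print(possible_matches(raw))                        # -> [(1.0, 'compound_A')]
```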
27 Science Use Case: Cryo-Electron Microscopy
Structure determination of TFIID from GB-scale image stacks via image classification. The real-time queue is used for assessment of data quality during electron microscopy data collection and rapid optimization of data-processing strategies, yielding the 3D structure of a TFIID-containing complex. Nogales Lab; Louder et al. (2016), Nature 531 (7596).
28 LCLS Workflow Today: 150 TB Analyzed in 5 Days
The DAQ (a multilevel data acquisition and control system) reads out the Cornell-SLAC Pixel Array diffraction detector and injector at SLAC and streams XTC-format data through the Science DMZ to NERSC storage (Global Scratch, /project (NGF), HPSS) and the Cray XC30 compute engine. Analysis fans out across parallel workers: psana hit finding (hitfinder), then spotfinder, indexing, and integration, then reconstruction. Prompt analysis requires fast networks and real-time HPC queues, and produces actionable knowledge for the next beamtime. A sketch of the hit-finding stage follows.
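A minimal sketch of the prompt-analysis loop in the style of LCLS's psana Python interface (the LCLS-I API). The experiment string, detector name, and hit-finding threshold below are illustrative, not from the talk.

```python
from psana import DataSource, Detector

ds = DataSource("exp=cxitut13:run=10")   # XTC stream for one run (hypothetical exp/run)
det = Detector("CxiDs1.0:Cspad.0")       # Cornell-SLAC pixel array detector (example name)

hits, nevt = 0, -1
for nevt, evt in enumerate(ds.events()):
    img = det.calib(evt)                 # calibrated detector image for this event
    if img is None:
        continue                         # damaged/missing event: skip
    # "hitfinder": keep only frames with enough bright (Bragg) pixels to be
    # worth the downstream spotfinder -> index -> integrate stages.
    if (img > 500).sum() > 20:           # threshold and pixel count are illustrative
        hits += 1
        # queue_for_indexing(nevt, img)  # hand off to spotfinder/indexing (not shown)

print(f"{hits} candidate hits out of {nevt + 1} events")
```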
29 LCLS-II 2019: Nanocrystallography Pipeline
- Streaming data from the detector to HPC at 2 GB/s, at a large multiple of today's data rates
- Indexing, classification, and reconstruction via an on-the-fly veto system
- Quasi-real-time response (<10 min)
- Terabit/s throughput from front-end electronics; since a terabit per second is roughly 125 GB/s, the veto system must discard the vast majority of raw data before the 2 GB/s stream reaches HPC
- Petaflop-scale analysis on demand
- Outputs: indexed diffraction image, reconstructed structure
30 Key Takeaways
- Data streaming and real-time analytics are emerging requirements at NERSC
- Experimental facilities are the heaviest users: light sources, telescopes
- SDN capabilities are needed to enable data flows directly between compute nodes and workflow databases
- Users would like to use realtime nodes for more long-running interactive work and debugging
- Provisioning resources for the real-time queue is an ongoing exercise
31 Acknowledgments: Shreyas Cholia, Doug Jacobsen (NERSC), and the NERSC real-time queue users!
32 Thanks!