A Converged HPC & Big Data Architecture in Production


1 A Converged HPC & Big Data Architecture in Production. Nick Nystrom, Sr. Director of Research & Bridges PI/PD. HP-CAST Invited Keynote, November 11, 2016, Salt Lake City. 2016 Pittsburgh Supercomputing Center

2 Bridges, a new kind of supercomputer at PSC, is designed to empower new research communities, converge HPC and High Performance Data Analytics (HPDA), bring desktop convenience to high-end computing, expand campus access, and help researchers to work more intuitively. Bridges entered production in July 2016. Funded by a $17.2M NSF award, Bridges emphasizes usability, flexibility, and interactivity, and is available at no charge for open research and course support. It supports popular programming languages and applications, including R, MATLAB, Python, Hadoop, and Spark. Bridges comprises 846 compute nodes with 128GB (800), 3TB (42), and 12TB (4) of RAM each; 96 GPUs (64 P100, 32 K80); dedicated nodes for persistent databases, web servers, and distributed services; and the first deployment of Intel's Omni-Path Architecture fabric.

3 Acquisition and operation of Bridges are made possible by the $17.2M National Science Foundation award #ACI, "Bridges: From Communities and Data to Workflows and Insight." Hewlett Packard Enterprise is delivering Bridges. All trademarks, service marks, trade names, trade dress, product names, and logos appearing herein are the property of their respective owners. 3

4 Gaining Access to Bridges. Bridges is allocated through XSEDE. Startup Allocations can be requested anytime and require only a brief proposal/abstract: up to 50,000 core-hours on RSM and GPU (128GB) nodes and 1,000 TB-hours on LSM (3TB) and ESM (12TB) nodes (XSEDE calls these Service Units, or SUs), plus a user-specified amount (in GB) of storage on Pylon; XSEDE ECSS (Extended Collaborative Support Service) can also be requested. Research Allocations (XRAC) are appropriate for larger requests, can run to millions or tens of millions of SUs, and can also request ECSS; community allocations can support gateways. Quarterly submission windows: March 15 to April 15, June 15 to July 15, September 15 to October 15, and December 15 to January 15. Educational Allocations support the use of Bridges for courses. Up to 10% of Bridges' SUs are available on a discretionary basis to industrial affiliates, Pennsylvania-based researchers, and others to foster discovery and innovation and broaden participation in data-intensive computing. 4
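
As a rough illustration of the allocation units above (core counts per node taken from the hardware slides later in the deck; the job sizes themselves are made up), a back-of-envelope accounting sketch in Python:

```python
# Back-of-envelope SU accounting using the units quoted above: RSM and GPU
# nodes are requested in core-hours, LSM/ESM nodes in TB-hours.
# Job sizes here are illustrative only.
rsm_cores_per_node = 28                     # 2 x 14-core Xeons per RSM node
rsm_job = 2 * rsm_cores_per_node * 10       # 2 RSM nodes for 10 hours
print(rsm_job, "core-hours")                # 560 core-hours of a 50,000 core-hour startup

lsm_job = 3 * 24                            # one 3 TB LSM node for 24 hours
print(lsm_job, "TB-hours")                  # 72 TB-hours of a 1,000 TB-hour startup
```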

5 Data Infrastructure for the National Advanced Cyberinfrastructure Ecosystem. Bridges is a new resource on XSEDE and can interoperate with other XSEDE resources, Advanced Cyberinfrastructure (ACI) projects, campuses, and instruments nationwide. Examples: reconstructing brain circuits from high-resolution electron microscopy; high-throughput genome sequencers; social networks and the Internet; Data Infrastructure Building Blocks (DIBBs) projects such as the Data Exacell (DXC), Integrating Geospatial Capabilities into HUBzero, and Building a Scalable Infrastructure for Data-Driven Discovery & Innovation in Education; other DIBBs and ACI projects; Carnegie Mellon University's Gates Center for Computer Science; and Temple University's new Science, Education, and Research Center. 5

6 Motivating Use Cases (examples): data-intensive applications and workflows; gateways (the power of HPC without the programming); shared data collections and related analysis tools; cross-domain analytics; deep learning; graph analytics, machine learning, genome sequence assembly, and other large-memory applications; scaling beyond the laptop; scaling research to teams and collaborations; in-memory databases; optimization and parameter sweeps; distributed and service-oriented architectures; data assimilation from large instruments and Internet data; leveraging an extensive collection of interoperating software; converging HPC and Big Data; research areas that haven't used HPC; new approaches to traditional HPC fields, for example machine learning; coupling applications in novel ways; and leveraging large memory, GPUs, and high bandwidth. 6

7 Objectives and Approach. Bring HPC to nontraditional users and research communities. Apply HPC effectively to big data. Bridge to campuses to streamline access and provide cloud-like burst capability. Leveraging PSC's expertise with shared memory, Bridges features 3 tiers of large, coherent shared-memory nodes: 12TB, 3TB, and 128GB. Bridges implements a uniquely flexible environment featuring interactivity, gateways, databases, distributed (web) services, high-productivity programming languages and frameworks, virtualization, and campus bridging. 7

8 Interactivity Interactivity is the feature most frequently requested by nontraditional HPC communities. Interactivity provides immediate feedback for doing exploratory data analytics and testing hypotheses. Bridges offers interactivity through a combination of shared and dedicated resources to maximize availability while accommodating different needs. 8

9 Gateways and Tools for Building Them. Gateways provide easy-to-use access to Bridges' HPC and data resources, allowing users to launch jobs, orchestrate complex workflows, and manage data from their browsers. They provide HPC Software-as-a-Service and make extensive use of VMs, databases, and distributed services. Examples: Galaxy; the Col*Fusion portal for the systematic accumulation, integration, and utilization of historical data; and interactive pipeline creation in GenePattern (Broad Institute). 9

10 Virtualization and Containers. Virtual Machines (VMs) enable flexibility, security, customization, reproducibility, ease of use, and interoperability with other services. User demand is for custom database and web server installations to develop data-intensive, distributed applications, and for containers for custom software stacks and portability. Bridges leverages OpenStack to provision resources between interactive, batch, Hadoop, and VM uses. 10

11 High-Productivity Programming Supporting languages that communities already use is vital for them to apply HPC to their research questions. 11

12 Spark, Hadoop & Related Approaches. Bridges' large memory is great for Spark! Bridges enables workflows that integrate Spark/Hadoop, HPC, GPU, and/or large shared-memory components. 12
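
A minimal PySpark sketch of the pattern large memory favors: cache a working set in RAM, then aggregate against the cached copy. It assumes only that PySpark is available; the data, memory setting, and column names are illustrative, not an actual Bridges workload.

```python
# PySpark sketch: keep a dataset memory-resident and aggregate against it.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("large-memory-aggregation")
         .config("spark.driver.memory", "8g")   # scale this up (hundreds of GB) on a 3 TB LSM node
         .getOrCreate())

# Illustrative DataFrame; in practice this would be read from /pylon1 or /pylon2.
df = spark.range(0, 10_000_000).withColumnRenamed("id", "event_id")
df = df.withColumn("category", df.event_id % 10)

df.cache()      # hold the working set in memory -- where Bridges' RAM pays off
df.count()      # materialize the cache

df.groupBy("category").count().show()
spark.stop()
```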

13 Campus Bridging: federated identity management and burst offload. Through a pilot project with Temple University, the Bridges project will explore new ways to transition data and computing seamlessly between campus and XSEDE resources. Federated identity management will allow users to use their local credentials for single sign-on to remote resources, facilitating data transfers between Bridges and Temple's local storage systems. Burst offload will enable cloud-like offloading of jobs from Temple to Bridges and vice versa during periods of unusually heavy load. 13

14 Bridges hardware, in a purpose-built Intel Omni-Path Architecture topology for data-intensive HPC: 20 Storage Building Blocks implementing the parallel Pylon storage system (10 PB usable); 4 MDS nodes; 2 front-end nodes; 2 boot nodes; 8 management nodes; 6 core Intel OPA edge switches (fully interconnected, 2 links per switch); Intel OPA cables; 4 HPE Integrity Superdome X (12TB) compute nodes, each with 2 OPA links; IB gateway nodes; 42 HPE ProLiant DL580 (3TB) compute nodes; 12 HPE ProLiant DL380 database nodes; 6 HPE ProLiant DL360 web server nodes; 20 leaf Intel OPA edge switches; 800 HPE Apollo 2000 (128GB) compute nodes; 32 RSM nodes, each with 2 NVIDIA Tesla P100 GPUs; 16 RSM nodes, each with 2 NVIDIA Tesla K80 GPUs. Bridges Virtual Tour: 14

15 Node Types (RAM per node; CPU / GPU / other; server model):
ESM: 12 TB; Intel Xeon E v3 (18c, 2.3/3.1 GHz, 45MB LLC) or Intel Xeon E v4 (22c, 2.2/3.3 GHz, 55MB LLC); HPE Integrity Superdome X.
LSM: 3 TB; Intel Xeon E v3 (16c, 2.2/3.2 GHz, 40MB LLC) or Intel Xeon E v4 (20c, 2.1/3.0 GHz, 50MB LLC); HPE ProLiant DL580.
RSM: 128 GB; 2 Intel Xeon E v3 (14c, 2.3/3.3 GHz, 35MB LLC); HPE Apollo 2000.
RSM-GPU: 128 GB; 2 Intel Xeon E v3 + 2 NVIDIA Tesla K80, or 2 Intel Xeon E v4 (16c, 2.1/3.0 GHz, 40MB LLC) + 2 NVIDIA Tesla P100; HPE Apollo 2000.
DB-s: 6 nodes; 128 GB; 2 Intel Xeon E v3 + SSD; HPE ProLiant DL380.
DB-h: 6 nodes; 128 GB; 2 Intel Xeon E v3 + HDDs; HPE ProLiant DL380.
Web: 128 GB; 2 Intel Xeon E v3; HPE ProLiant DL360.
Other (b): 128 GB; 2 Intel Xeon E v3; HPE ProLiant DL360 and DL380.
Gateway: 128 GB; 2 Intel Xeon E v3; HPE ProLiant DL.
Storage: 128 GB; Intel Xeon E v3 (12c, 2.5/3.3 GHz, 30MB LLC) or Intel Xeon E v4 (14c, 2.4/3.3 GHz, 35MB LLC); Supermicro X10DRi.
Total: 282 TB of RAM across 908 nodes.
a. RAM in Bridges' Intel Xeon v3 nodes is DDR4-2133; RAM in Intel Xeon v4 nodes is DDR4-2400.
b. Other nodes = front end (2) + management/log (8) + boot (4) + MDS (4). 15

16 System Capacities (Phase 1, Phase 2 technical upgrade, and totals): compute node counts and peak fp64 (Tf/s) for RSM, RSM-GPU, LSM, and ESM nodes; total RAM (TB); HDD (TB); Pylon storage (10 PB usable) in PB raw and usable; and node-local storage in PB raw. Excludes utility nodes (database, web server, front-end, management, storage, etc.). 16

17 GPU Nodes. Bridges' GPUs are intended to accelerate both deep learning and simulation codes. Phase 1: 16 nodes, each with 2 NVIDIA Tesla K80 GPUs (32 total), 2 Intel Xeon E v3 CPUs (14c, 2.3/3.3 GHz), and 128GB DDR4 RAM. Phase 2: +32 nodes, each with 2 NVIDIA Tesla P100 GPUs (64 total), 2 Intel Xeon E v4 CPUs (16c, 2.1/3.0 GHz), and 128GB DDR4 RAM. 17

18 NVIDIA Tesla K80. Kepler architecture. 2 GPUs per board, 2496 CUDA cores each (192/SM). 7.08B transistors per 561 mm² die (28nm). 24 GB GDDR5 (2 × 12 GB); 480 GB/s aggregate memory bandwidth. 562 MHz base, 876 MHz boost. 2.91 Tf/s (64b), 8.73 Tf/s (32b). Bridges: 32 K80 GPUs, 279 Tf/s (32b). 18

19 NVIDIA Tesla P100. Pascal architecture. 3584 CUDA cores (64/SM). 15.3B transistors on a 610 mm² die (16nm). 16GB CoWoS HBM2 at 720 GB/s with ECC. 4.7 Tf/s (64b), 9.3 Tf/s (32b), 18.7 Tf/s (16b). A page migration engine improves unified memory. Bridges: 64 P100 GPUs, 600 Tf/s (32b). Bridges results forthcoming. 19
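
The aggregate single-precision figures on these two GPU slides follow directly from the per-GPU peaks they quote; a quick arithmetic check in Python:

```python
# Arithmetic check of the aggregate fp32 peaks quoted for Bridges' GPU pool,
# using the per-GPU peaks listed on the slides.
k80_fp32_per_gpu = 8.73    # Tf/s, Tesla K80 (boost)
p100_fp32_per_gpu = 9.3    # Tf/s, Tesla P100

print(32 * k80_fp32_per_gpu)   # ~279 Tf/s across 32 K80s
print(64 * p100_fp32_per_gpu)  # ~595 Tf/s across 64 P100s, rounded to ~600 on the slide
```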

20 Database and Web Server Nodes. Dedicated database nodes power persistent relational and NoSQL databases and support data management and data-driven workflows, with SSDs for high IOPS and HDDs for high capacity. Dedicated web server nodes enable distributed, service-oriented architectures, with high-bandwidth connections to XSEDE and the Internet. 20

21 Data Management. Pylon is a large, central, high-performance storage system: 10 PB usable and visible to all compute nodes. It is currently implemented as two complementary file systems: /pylon1 (Lustre, targeted for $SCRATCH use; non-wiped directories available by request) and /pylon2 (SLASH2, targeted for large datasets, community repositories, and distributed clients). Distributed (node-local) storage enhances application portability, improves overall system performance, and improves performance consistency to the shared filesystem; it totals 7.2 PB in aggregate (6 PB on non-GPU RSM nodes: Hadoop, Spark). 21

22 Intel Omni-Path Architecture (OPA). Bridges is the first production deployment of Omni-Path. Omni-Path connects all nodes and the shared filesystem, providing Bridges and its users with: 100 Gbps line speed per port and 25 GB/s bidirectional bandwidth per port; measured 0.93 μs latency; 160M MPI messages per second; a 48-port edge switch that reduces interconnect complexity and cost; HPC performance, reliability, and QoS; support for OFA-compliant applications without modification; and early access to this new, important, forward-looking technology. Bridges deploys OPA in a two-tier island (leaf-spine) topology developed by PSC for cost-effective, data-intensive HPC. 22
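
Latency and bandwidth figures like these are conventionally measured with an MPI ping-pong microbenchmark; a minimal sketch using mpi4py and NumPy follows. It assumes mpi4py is installed and is run with exactly two ranks; the message size and repetition count are illustrative.

```python
# Minimal MPI ping-pong sketch (mpi4py + NumPy). Run with two ranks, e.g.
#   mpirun -n 2 python pingpong.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
nbytes = 8 * 1024 * 1024            # 8 MiB payload
reps = 100
buf = np.zeros(nbytes, dtype=np.uint8)

comm.Barrier()
t0 = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send(buf, dest=1, tag=0)
        comm.Recv(buf, source=1, tag=1)
    elif rank == 1:
        comm.Recv(buf, source=0, tag=0)
        comm.Send(buf, dest=0, tag=1)
t1 = MPI.Wtime()

if rank == 0:
    rtt = (t1 - t0) / reps                           # round-trip time per exchange
    print(f"one-way time: {rtt / 2 * 1e6:.2f} us")   # use a tiny payload to approximate latency
    print(f"bandwidth: {2 * nbytes / rtt / 1e9:.2f} GB/s per direction")
```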

23 Omni-Path Architecture User Group (OPUG). OPUG meeting at SC16: November 17, 1:30-3:00pm, Room 155-A. An opportunity for OPA early adopters to share experiences. Led by PSC: Bridges is the first large-scale deployment of Omni-Path, and initial conversations on advanced interconnect technology date back to 2007. PSC OPUG contacts: Dave Moses, J Ray Scott, and Nick Nystrom. 23

24 MIDAS held its first Public Health Hackathon at PSC. The challenge: develop applications to visualize public health data. Participants: 12 teams from across the country and India competed and were given early access to Bridges nodes for analytics. Sponsored by Intel Omni-Path and Dell. First place entry: SPEW VIEW, created by Shannon Gallagher and Lee Richardson of Carnegie Mellon University; the full application is available online. 24

25 Example: Causal Discovery Portal. Center for Causal Discovery, an NIH Big Data to Knowledge Center of Excellence. A browser-based UI lets users prepare and upload data, run causal discovery algorithms, and visualize results over the Internet. A web node VM (Apache Tomcat, messaging) and a database node VM (authentication, data management, provenance, other DBs) front the system; the causal discovery algorithms execute on LSM (3TB) and ESM (12TB) nodes against memory-resident datasets, connected by Omni-Path to the Pylon filesystem holding data such as TCGA and fMRI. Analytics: FGS, IMaGES, and other algorithms, building on TETRAD. 25

26 De Novo Assembly of the Sumatran Rhino Genome. James Denvir and Swanthana Rekulapa, Marshall University. They first assembled the 1-gigabase Narcissus flycatcher (Ficedula narcissina) genome. On the users' local resources, they could assemble only ⅓ of the data due to memory limitations, taking 16 hours. Assembly of all the data on a 3TB node of Bridges required 1.5 TB of memory and only 6.6 hours, at least 3-4× faster than anticipated based on execution times elsewhere. They then assembled the 3-gigabase Sumatran rhino (Dicerorhinus sumatrensis) genome, which required 1.9 TB of memory and completed in only 11 hours, again 3-4× faster than anticipated. [Images: Narcissus Flycatcher (Ficedula narcissina) in Osaka, Japan, by Kuribo, CC BY-SA; Sumatran Rhinoceroses at the Cincinnati Zoo & Botanical Garden, by Charles W. Hardin, CC BY.] 26

27 Improved SNP Detection in Metagenomic Populations. Wenxuan Zhong, Xin Xing, and Ping Ma, University of Georgia. Assembled 378 gigabase pairs (Gbp) of gut microbial DNA from normal and diabetic patients. The massive metagenome assembly took only 16 hours using Ray, an MPI-based metagenome assembler, on 20 Bridges RM nodes connected by Omni-Path. They identified 2480 species-level clusters in the assembled sequence data by running the MetaGen clustering software across 10 RM nodes, clustering 500,000 contiguous sequences in only 14 hours. The team is now using Bridges to test a new likelihood-based statistical method for fast and accurate SNP detection from these metagenomic sequencing data, aiming to identify critical differences in gut microbes associated with diabetes. [Image: Environmental Shotgun Sequencing (ESS): (A) sampling from habitat; (B) filtering particles, typically by size; (C) DNA extraction and lysis; (D) cloning and library construction; (E) sequencing the clones; (F) sequence assembly. By John C. Wooley, Adam Godzik, and Iddo Friedberg, CC BY 2.5.] 27

28 Characterizing Diverse Microbial Ecosystems From Terabase-Scale Metagenomic Data. Brian Couger, Oklahoma State University. Assembled 11 metagenomes sampled from diverse sources, comprising over 3 trillion bases of sequence data, including a recent massive assembly of 1.6 Tbp of metagenomic data from an oil sands tailings pond, a bioremediation target. The MPI-based Ray assembler performed excellently on 90 RM nodes, completing the assembly in only 4.25 days. Analysis of the assembled data is in progress to characterize the organisms present in these diverse environments and to identify new microbial phyla. [Image: oil sands tailings pond, NASA Earth Observatory, public domain.] 28

29 Investigating Economic Impacts of Images and Natural Language in E-commerce. Dokyun Lee, CMU Tepper School of Business. Security and uncertain quality create challenges for sharing economies. Lee et al. studied the impact of high-quality, verified photos for Airbnb hosts: 17,000 properties over 4 months, analyzed using Bridges GPU nodes. Difference-in-Differences (DD) analysis showed that, on average, rooms with verified photos are booked 9% more often. Separating the effects of photo verification from photo quality and room reviews indicates that high photo quality results in $2,455 of additional yearly earnings. They also found asymmetric spillover effects: at the neighborhood level, there appears to be higher overall demand when more rooms have verified photos. 29
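
For readers unfamiliar with Difference-in-Differences, a minimal sketch with synthetic data shows the idea: the coefficient on the treatment-by-period interaction estimates the effect. The column names, sample size, and simulated +0.09 effect are illustrative only and are not the study's actual data or results.

```python
# Difference-in-Differences sketch on synthetic data (pandas + statsmodels).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 4000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = listing received verified photos
    "post": rng.integers(0, 2, n),      # 1 = observation after verification
})
# Simulate a booking probability with a +0.09 treatment effect in the post period.
base = 0.30 + 0.02 * df.treated + 0.01 * df.post + 0.09 * df.treated * df.post
df["booked"] = rng.binomial(1, base)

model = smf.ols("booked ~ treated * post", data=df).fit()
print(model.params["treated:post"])     # DiD estimate; should be close to 0.09
```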

30 Connectionist Temporal Classification. Florian Metze, Hari Parthasarathi, Yajie Miao, and Dong Yu, Carnegie Mellon University. CTC is a revived approach to sequence classification that works well with speech at the phoneme, syllable, character, or word level. Bridges is being used to train models with the Eesen toolkit, part of the Speech Recognition Virtual Kitchen, a web-based resource that allows users to check out VMs with complete deep neural network (DNN) environments. 30
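
To make the technique concrete, here is a minimal CTC loss sketch in PyTorch (not the Eesen toolkit itself): CTC scores a label sequence against a longer frame-level output without requiring a frame alignment. All shapes and label values are illustrative.

```python
# Minimal CTC loss usage in PyTorch.
import torch
import torch.nn as nn

T, N, C = 50, 4, 28          # time steps, batch size, classes (index 0 = blank)
log_probs = torch.randn(T, N, C).log_softmax(2)            # stand-in for network output
targets = torch.randint(1, C, (N, 10), dtype=torch.long)   # label sequences (no blanks)
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

ctc = nn.CTCLoss(blank=0)
loss = ctc(log_probs, targets, input_lengths, target_lengths)
print(loss.item())           # in training, loss.backward() would follow
```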

31 Thermal Hydraulics of Next-Generation Gas-Cooled Nuclear Reactors. PI: Mark Kimber, Texas A&M University. Completed 3 Large Eddy Simulations of turbulent thermal mixing of helium coolant in the lower plenum of the Gen IV High-Temperature Gas-cooled Reactor (HTGR), performed using OpenFOAM, an open-source CFD code. The simulations are of scaled-down versions of representative sections of the HTGR lower plenum, each containing 7 support posts and 6 high-Reynolds-number jets in a cross-flow. Each simulation involves a high-quality block-structured mesh with 16+ million cells and was run on 20 regular Bridges nodes (560 cores) for ~10 million time steps (~1 second of simulated time). The work helps in understanding the turbulent thermal mixing in detail, as well as temperature distributions and hotspots on the support posts. Anirban Jana (PSC) is co-PI and computational lead for this DOE project. [Figure: distribution of vorticity in a representative unit cell of the HTGR lower plenum containing 7 support posts and 6 jets in a crossflow (right to left); turbulent mixing of core coolant jets in the lower plenum can cause temperature fluctuations and thermal fatigue of the support posts.] Acknowledgements: DOE-NEUP (grant no. DE-NE) and NSF-XSEDE (grant no. CTS160002). 31

32 Mechanisms of Chromosomal Rearrangements. Jose Ranz and Edwin Solares, UC Irvine. Assembled the Drosophila willistoni genome four ways with various assemblers; the mix of large-memory and regular-memory nodes on Bridges enables testing various assemblers and sequencing technologies to obtain the highest-quality genome. High-quality assemblies are critical to understanding chromosomal rearrangements, which are pivotal in natural adaptation. Bridges' high-core-count, high-memory nodes saved the researchers at least a month compared to other resources. They also assembled the Anopheles arabiensis genome and transcriptomes, again using multiple sequencing data types and assembly software. The flexibility of Bridges is key to supporting the diverse needs of this project. [Images: Drosophila willistoni adult male, by Mrs. Sarah L. Martin, Plate XVI of J.T. Patterson and G.B. Mainland, "The Drosophilidae of Mexico," University of Texas Publications, 4445:9-101, 1944, public domain; Anopheles gambiae mosquito, by James D. Gathany, The Public Health Image Library, ID #444, public domain.] 32

33 Simulations of Nanodisc Systems. Yifei Qi and Wonpil Im, Lehigh University. Using NAMD on Bridges' NVIDIA P100 GPU nodes to simulate a number of nanodisc systems to be used as examples for the Nanodisc Builder module in CHARMM-GUI. For a system of 263,888 atoms, the simulation speed on one P100 node (6.25 ns/day) is faster than on three dual-CPU nodes (4.09 ns/day). CHARMM-GUI provides a web-based graphical user interface to generate various molecular simulation systems and input files (for CHARMM, NAMD, GROMACS, AMBER, GENESIS, LAMMPS, Desmond, OpenMM, and CHARMM/OpenMM) to facilitate and standardize the use of common and advanced simulation techniques. [Image: a nanodisc system showing two helical proteins wrapping around a discoidal lipid bilayer.] 33

34 Integration of Membrane Proteins into the Cell Membrane. Michiel Niesen and Thomas Miller, Caltech. Calculated the integration efficiencies of protein sequences by means of their own coarse-grained (CG) model, running ensembles of simulations in parallel. As of 9/13/16, they have calculated membrane protein integration efficiencies for 16 proteins on Bridges. The calculated integration efficiencies correlate well with experimentally observed expression levels for a series of three homologs of the membrane protein TatC. [Figures: schematic representation of integration, showing the ribosome (violet), nascent loops (cyan), nascent transmembrane domains (red), lateral gate (green), and lipids (light gray); experimentally observed expression levels for the three TatC homologs (left) correlate well with the calculated integration efficiencies (right).] 34

35 For Additional Information. Bridges PI: Nick Nystrom. Co-PIs: Michael J. Levine, Ralph Roskies, and J Ray Scott. Project Manager: Robin Scibek. 35

36 Thank you. Questions? 36

37 Additional Content

38 Interactions of Outer Membrane Proteins and Lipids. Wonpil Im, Lehigh University. The outer membrane (OM) of Gram-negative bacteria separates the periplasm from the external environment and functions as a selective barrier that prevents the entry of toxic molecules such as antibiotics and bile salts. This barrier is crucial for the survival of bacteria in diverse and hostile environments, but it also poses significant threats to public health. The OM is an asymmetric bilayer composed of phospholipids in the periplasmic leaflet and lipopolysaccharides (LPS) in the external leaflet, along with β-barrel OM proteins (OMPs) and lipidated periplasmic lipoproteins. The picture shows a crowded OM model constructed using the coarse-grained Martini force field: the OMPs cover 70% of the membrane, with LPS as the top-leaflet lipid and POPE, POPG, and cardiolipin as the bottom-leaflet lipids. Simulations are underway using NAMD on Bridges to elucidate interesting behaviors of OMPs and lipids, such as protein clustering and anomalous lipid diffusion in a crowded environment. The system consists of ~1.2 million atoms with a box size of 991 Å × 991 Å × 135 Å, and thus requires the computing power of Bridges to reach reasonable sampling. [Image: coarse-grained simulations of crowded bacterial outer membranes. The system consists of 114 OmpF trimers (~4.7%, blue) and 282 OmpA (~11.6%, red), with 2034 LPS (~83.7%) in the top leaflet and 3629 POPE (~75.1%), 963 POPG (~19.9%), and 240 cardiolipin (~5.0%) in the bottom leaflet; O-antigens and other lipid atoms are shown in white.] 38

39 Molecular Dynamics Simulation of the YidC Protein. J Hettige, K Immadisetty, and M Moradi, University of Arkansas. They first simulated an atomic model of the protein insertase YidC, embedded in an explicit membrane, for 200 ns. On the users' local resources, these simulations would have taken ~66 days; on Bridges, they took only ~10 days using 4 regular shared-memory nodes. They then repeated the simulations after adding a loop that was missing in the original crystal structure. The fast turnaround made it feasible to repeat the simulations with a more complete model of YidC. Comparison of the two sets of simulations provided evidence for the hypothesis that the C2 loop stabilizes the protein, particularly in the C1 region, which is known to play an important role in the protein insertion mechanism. [Figure: root mean square fluctuation (RMSF, Å) per residue of the membrane-embedded YidC protein, with and without the C2 loop, from the 200-ns simulations, showing that the missing C2 loop can stabilize the functionally important C1 region.] 39

40 Theoretical Studies on Physical Chemical Properties. Yi-Gui Wang, Southern Connecticut State University. Investigated large systems with very detailed basis sets (MP2/ G(2d,p)//B3LYP/6-31+G*); locally, they could only run at HF/3-21G, which is inadequate. Thoroughly evaluated the N-selective reaction with Marcus theory and the O-selective reaction with the association effect. They report that Bridges' speed allows them to address the large number of intermediates and to run the many jobs that are needed. 40
