HPC & BD Survey: Results for participants who use resources outside of NJIT


Questions concerning the use of HPC/BD resources outside of NJIT were contained in Section 1 of the survey, HPC Hardware Resources. 53 participants (91%) took this section, and 21 of them reported using outside resources. 11 (52%) of these were faculty, so this group had a proportionately larger faculty representation than the survey as a whole, where faculty comprised 40%.

Participants indicated as many reasons as they wished for using outside resources by choosing among the provided options (Not enough CPU cores; CPU cores too slow; Not enough RAM; For parallel computing, node interconnect speed too slow; Not enough GPU cores; Read/write of temporary files too slow; Storage space inadequate; Required software not available; OTHER). They then described the outside resources they used, and could provide additional comments if they wished.

The number of participants citing each reason for using outside resources is shown in the graph below. The specified non-NJIT resources, along with the reasons for their use, are shown in the subsequent table. Each table row represents the responses of a single participant; hence, resources listed more than once were specified by more than one participant.

REASONS (choices offered): Not enough CPU cores; CPU cores too slow; Not enough RAM per core; For parallel computing, node interconnect speed too slow; Not enough GPU cores; Read/write of temporary files too slow; Storage space inadequate; Required software not available; OTHER.

OUTSIDE RESOURCES specified (one entry per participant):
- XSEDE
- supplementary computation
- Compute Canada, Amazon Web Services, Google Cloud Platform
- AWS & GCP
- GPU station with 4 Titan Xp GPUs, purchased by professor
- lab NFS computer, DOE & Navy computers
- NASA Pleiades supercomputer
- Resources at NYU
- TACC
- CIPRES portal
- Workstation and other universities with better support
- Rutgers Conley3 and Amarel
- 8-core 4.0 GHz workstation with high-performance 2 GB GPU
- Stanford HPC system, Barcelona Supercomputing Center
- UIUC cluster machine
- HPC from UPenn
- unspecified
- Unspecified, provided by collaborators

The table below lists the written descriptions for OTHER and the Additional comments.

OUTSIDE RESOURCES: supplementary computation; AWS & GCP; lab NFS computer, DOE & Navy computers; NASA Pleiades supercomputer

Reasons, OTHER: "some software is just too expensive for the university to consider for one or two users only - so use if can find on nsf or dod computer"

ADDITIONAL COMMENTS:
- "Because I'm facing a problem in which my jobs got killed without any reason, but that's surely due to a problem in the cluster, and when I contact the ARCS, they said they are not able to identify the problem."
- "NJIT HPC computational resources are not adequate for the amount of research we perform in our group."
- "Data transfer between node and disk to memory and temp space for BD tasks is not enough. For data science, we don't have the latest versions of libraries, and I couldn't find the cuDNN lib for TensorFlow to run deep learning on the GPU. I believe we may have to improve HPC/BD resources for deep learning, both parallel and serial, in terms of libraries, drivers, and development and testing environments."
- "Easier access. Response time."
- "For Kong head node, the 1 GB quota is too small."
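Several comments above concern home-directory storage limits (1 GB on the Kong head node). As a rough illustration only, a user could compare filesystem usage against such a figure with a stdlib-only sketch; the `usage_report` helper and the 1 GB threshold are ours, and a real per-user check on a cluster would use the site's `quota` command rather than whole-filesystem numbers:

```python
import os
import shutil

def usage_report(path, quota_bytes):
    """Summarize filesystem usage for `path` against a nominal quota.

    Note: shutil.disk_usage reports the whole filesystem, not a per-user
    quota, so this is only a coarse sanity check.
    """
    total, used, free = shutil.disk_usage(path)
    return {
        "path": path,
        "fs_used_gb": used / 1e9,
        "quota_gb": quota_bytes / 1e9,
    }

# 1 GB figure taken from the Kong head-node comment above.
report = usage_report(os.path.expanduser("~"), quota_bytes=10**9)
print(report)
```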

Resources at NYU: "More experiments can be run :) Team is made up of NYU and NJIT students/faculty. Therefore, it is easy for us to use both clusters to run our experiments."

TACC: "Inadequate support"; "Inadequate support and not up-to-date documentation"

Stanford HPC system, Barcelona Supercomputing Center; UIUC cluster machine: "I am still completing some work on those machines. GLIBC is too old on stheno. I will be using those only until I am done with some old computation. I will switch to NJIT resources very soon, though."

"For Kong, I was attempting to install miniconda and work on a specific project in Python. Unfortunately, the storage for each user is just 5 GB, and it is totally not enough for a student user who needs large computation. For stheno, the GLIBC version is way too old (2.5), and it should have an up-to-date version for users."

Additional observations: participants who indicated use of outside resources versus participants who did not

Requests for new processors

24% of participants who used outside resources expressed an interest in new processors, compared with 50% of participants who did not use outside resources. (The candidate new processors were Intel Core i7 or i9; Google Tensor Processing Unit; Intel Nervana Neural Network Processor; Intel Xeon Phi; and AMD Epyc.) While both groups favored Intel Core i7 or i9, the outside-resources group also favored the Google Tensor Processing Unit and Intel Xeon Phi relative to the non-outside-resources group, while the non-outside-resources group favored AMD Epyc relative to the outside-resources group. (Note that a participant could list any number of proposed processors. Hence, even though a larger variety of proposed processors was chosen by the outside-resources group, a greater number of the non-outside-resources group's members indicated a wish for any new processors at all.)
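Claims like the stheno GLIBC complaint are easy to verify on any node. A minimal stdlib-only sketch, assuming a Linux host; the `version_tuple` helper and the 2.17 minimum are our illustrative assumptions, not a documented requirement:

```python
import platform

# Report the C library the running Python interpreter was linked against.
# On Linux this typically returns e.g. ('glibc', '2.17'); the comment above
# reports glibc 2.5 on stheno.
libc, version = platform.libc_ver()
print(f"C library: {libc} {version or '(unknown)'}")

def version_tuple(v):
    """Turn a dotted version string like '2.17' into (2, 17) for comparison."""
    return tuple(int(part) for part in v.split(".") if part.isdigit())

MINIMUM = "2.17"  # hypothetical floor; many modern prebuilt binaries want at least this
if libc == "glibc" and version and version_tuple(version) < version_tuple(MINIMUM):
    print(f"glibc {version} is older than the assumed minimum {MINIMUM}")
```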
Requests for currently unavailable software

Participants were asked to specify currently unavailable software that they wanted, state whether the software had associated costs, and indicate the anticipated level of use of the software in their research and teaching. In the table below, software specified by participants who use outside resources is shown in the gray rows; the non-highlighted rows list software specified by participants who do not use outside resources. Note that some items (e.g., TensorFlow) were requested by multiple participants. Each individual request is listed, as participants varied in their proposed usage.

NAME of REQUESTED SOFTWARE | Associated costs | Research use | Teaching use
Too many options for compiler lead to problems; simplify the list | no | medium | low

cclib python library; can be installed from anaconda by "conda install -c omnia cclib" | no | high | high
Quantum Espresso; open-source alternative to Gaussian 16 | no | high | high
GAMESS; open-source alternative for Gaussian | no | high | high
Polyrate; software for rate constant calculations; has interfaces for Gaussian, NWChem, GAMESS | no | high | high
Open Babel; like cclib, it's a useful tool to have | no | high | high
Tensorflow | no | high | low
Scikit Learn (python library) | no | high | low
Keras | no | medium | low
Nektar++ | no | high | no
cudnn; deep learning for GPU | no | high | high
Stata | yes | high | no
SAS | yes | high | no
Matlab 2017b | yes | high | high
Matlab Computer Vision Toolbox | yes | high | high
Updated version of Lammps; working GPU acceleration in Lammps; Lammps with most of the packages preinstalled | no | high | low
Updated Anaconda Python 2 (and maybe 3) packages/libraries | no | high | high
Tensorflow | no | medium | medium
Tensorflow | no | high | medium
tensorflow | no | medium | high
Keras | no | high | low
keras | no | high | medium
git latest version | no | high | high
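Several of the requested items are Python packages. As a sketch of how an administrator might audit which of them are already importable on a given node, the stdlib snippet below checks the import machinery; the request-to-module-name mapping (e.g., Scikit Learn → `sklearn`, pytorch → `torch`) is our assumption, and results depend entirely on the node's environment:

```python
import importlib.util

# Module names for the Python packages requested in the table above
# (cclib, TensorFlow, scikit-learn, Keras, PyTorch).
requested = ["cclib", "tensorflow", "sklearn", "keras", "torch"]

# find_spec returns None when a top-level package is not installed.
available = {name: importlib.util.find_spec(name) is not None for name in requested}
for name in requested:
    print(f"{name:12s} {'installed' if available[name] else 'MISSING'}")
```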

MACS (http://liulab.dfci.harvard.edu/macs/download.html) | no | high | no
BETA (http://cistrome.org/beta/) | no | medium | no
STAR (https://github.com/alexdobin/star) | no | medium | no
pytorch | no | high | medium