Regional & National HPC resources available to UCSB

Triton Affiliates and Partners Program (TAPP)
Extreme Science and Engineering Discovery Environment (XSEDE)
UCSB clusters
https://it.ucsb.edu/services/supercomputing

Triton Affiliates and Partners Program (TAPP)
http://tritonresource.sdsc.edu/
TAPP encourages campus participation by allowing researchers to use time and storage space on SDSC computing and data resources. Campuses purchase blocks of time or storage, which researchers may then request from their TAPP campus administrators.

TAPP - hardware
http://tritonresource.sdsc.edu/hardware.php
Data Analysis Facility: 28 Sun X4600M2 nodes, each with 8 quad-core AMD Shanghai 8380 2.5 GHz processors (8 nodes @ 512 GB memory, 20 nodes @ 256 GB memory); 4 nodes are dedicated local database nodes.
Triton Compute Cluster: 256 Appro gB222X blade nodes, each with 2 quad-core Intel Nehalem 2.4 GHz processors and 24 GB memory; 20 TeraFlops peak.
Data Oasis (planned): 2-4 petabytes of disk space across 3,000-6,000 disks; 60-120 GB per second aggregate throughput.
Interconnect: Myricom Myrinet multiprotocol switch with 448 MX ports and 32 ten-gigabit Ethernet ports; worst-case MPI latency of 2.4 microseconds and an achievable 1.2 gigabytes per second per network connection.
Software: http://tritonresource.sdsc.edu/software.php
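As a rough sanity check, the quoted 20 TeraFlops peak can be reproduced from the Triton Compute Cluster node specs above. The minimal Python sketch below assumes 4 double-precision floating-point operations per core per clock cycle for Nehalem (128-bit SSE: 2 adds + 2 multiplies); that rate is an assumption, not something stated on the slide.

    # Rough estimate of the Triton Compute Cluster's theoretical peak,
    # using the node specs from the slide above.
    nodes = 256              # Appro gB222X blade nodes
    sockets_per_node = 2     # quad-core Intel Nehalem 2.4 GHz processors per node
    cores_per_socket = 4
    clock_ghz = 2.4
    flops_per_cycle = 4      # assumed DP flops per core per cycle (Nehalem/SSE)

    cores = nodes * sockets_per_node * cores_per_socket
    peak_tflops = cores * clock_ghz * flops_per_cycle / 1000.0
    print(f"{cores} cores, theoretical peak ~{peak_tflops:.1f} TFLOPS")
    # -> 2048 cores, ~19.7 TFLOPS, consistent with the quoted 20 TeraFlops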

TAPP accounts
http://www.oit.ucsb.edu/computing/supercomputing/accounts.asp
Three types of accounts:
Individual accounts
Research group accounts
Class accounts

TAPP accounts
Individual accounts: up to 1,000 hrs/year (500 hrs per 6-month semester), intended for training and start-up. To apply, use the TAPP Allocation Application Form (doc). Unused time rolls over.
Research accounts: maximum allocation is 20,000 hrs/semester. To apply, use the TAPP Allocation Application Form (doc). If you are requesting more than 5,000 hours per semester, you must also submit the TAPP Additional Usage Allocation Form (doc).
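Allocated time on clusters like these is typically charged in core-hours (cores used multiplied by wall-clock hours); that charging model, and the job size used below, are illustrative assumptions rather than details from the slides. A minimal Python sketch of how a single job draws down a research allocation:

    # Minimal sketch of allocation accounting, assuming charges in core-hours
    # (cores used x wall-clock hours). Job parameters are hypothetical.
    def core_hours(nodes: int, cores_per_node: int, wallclock_hours: float) -> float:
        """Core-hours charged for one job under the assumed model."""
        return nodes * cores_per_node * wallclock_hours

    semester_allocation = 20_000   # research account cap, hours per semester
    job = core_hours(nodes=4, cores_per_node=8, wallclock_hours=12.0)
    print(f"One job: {job:.0f} core-hours, "
          f"{job / semester_allocation:.1%} of the semester allocation")
    # -> One job: 384 core-hours, 1.9% of the semester allocation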

XSEDE (formerly TeraGrid)
XSEDE, the Extreme Science and Engineering Discovery Environment, supports 20 supercomputers and high-end visualization and data analysis resources across the country.
https://www.xsede.org/
https://portal.xsede.org/

Campus Champions Program
What is it? The program supports campus representatives as a local source of knowledge about HPC and related topics, and gives them access to XSEDE and input to its staff.
Allocation types: Campus Champion, Startup, Research

XSEDE - hardware
HPC resources with a short overview and short user guide for each: https://www.xsede.org/resources/overview
HPC resources with a full description and full user guide for each: https://portal.xsede.org/resource-monitor

HPC RESOURCE NAME | SITE | MANUFACTURER / PLATFORM | MACHINE TYPE | PEAK TFLOPS | DISK SIZE (TB)
Gordon ION | SDSC | Appro | Cluster | 0 | 4000
Forge | NCSA | Dell PowerEdge C6145 with NVIDIA Fermi M2070 | Cluster | 150 | 600
Ranger | TACC | Sun Constellation System | Cluster | 579.3 | 1730
Kraken-XT5 | NICS | Cray XT5 | MPP | 1174 | 2400
Lonestar4 | TACC | Dell PowerEdge Westmere Linux Cluster | Cluster | 302 | 1000
Steele | Purdue U | Dell 1950 Cluster | Cluster | 66.59 | 130
Trestles | SDSC | Appro | Cluster | 100 | 140
Quarry | Indiana U | Dell AMD | SMP | 0 | 335
Blacklight | PSC | SGI UV 1000 cc-NUMA | SMP | 36 | 150
Keeneland | Georgia Tech | HP and NVIDIA | Cluster | 615 | 0
Advanced visualization:
Spur | TACC | Sun Visualization Cluster | Cluster | 1.13 | 1730
Longhorn | TACC | Dell / NVIDIA Visualization & Data Analysis Cluster | Cluster | 20.7 | 210
Nautilus | NICS | SGI/NVIDIA Visualization and Data Analysis System | SMP | 8.2 | 960
High-throughput computing:
Open Science Grid | USC | Various | Linux Cluster | 0 | 0
Condor Pool | Purdue U | Condor Pool | Cluster | 60 | 170

STORAGE RESOURCE NAME | SITE | MEDIA TYPE | FILE SPACE (TB)
IU Archival Storage (replicated or single copy) | Indiana U | Tape | 2800
Data Supercell | PSC | Disk | 4000
Dedicated (non-purged) disk for databases and data collections | Indiana U | Disk | 100
Lustre file space (IU Data Capacitor) | Indiana U | Disk | 535
NCSA Tape Storage | NCSA | Tape | 10000

XSEDE - software
XSEDE software resources include centrally supported software and services, software supported locally by the service-provider sites, software environment management tools, and software areas where user-maintained software can be installed and made available to a specified community of users.
https://www.xsede.org/software (Comprehensive Software Search)

XSEDE - accounts
https://portal.xsede.org/
Create a portal account: enter the portal user name and click Create Account.
Then e-mail your user name, the requested machine(s), and the allocation form [http://www.oit.ucsb.edu/computing/supercomputing/accounts.asp] to kadir@oit.ucsb.edu.

XSEDE - accounts
Individual accounts: up to 30,000 hrs/year for training and getting acclimated; use the XSEDE Allocation Application Form (doc) [http://www.oit.ucsb.edu/computing/supercomputing/accounts.asp].
For more than 30,000 hrs/year, submit a proposal to the XSEDE Allocations Committee [https://www.xsede.org/web/guest/new-allocation].

XSEDE accounts
Startup Allocations: reviewed continually throughout the year; open to faculty and post-docs; easy to request, with fast approval; up to 200,000 CPU-hours.
Research Allocations: handled by the XSEDE Resource Allocations Committee (XRAC), which meets quarterly.

Class Accounts
Faculty members who wish to use high-performance computers for their classes may apply for time on the SDSC Triton and/or XSEDE systems. Applicants for a class account will need to supply:
Instructor's resume
Course title
Course description
Course syllabus
Number of students
https://it.ucsb.edu/services/supercomputing/accounts

A Startup allocation is easy to get! https://portal.xsede.org/submit-request

https://portal.xsede.org/web/guest/allocation-policies

Extended Collaborative Support Services (ECSS)
https://www.xsede.org/ecss
The Extended Collaborative Support Service (ECSS) pairs members of the XSEDE user community with expert staff members for an extended period to work together to solve challenging science and engineering problems through the application of cyberinfrastructure.

Queue prediction, file manager, training, forums, documentation

UCSB Resources