Introduction to FREE National Resources for Scientific Computing. Dana Brunson. Jeff Pummill

Introduction to FREE National Resources for Scientific Computing Dana Brunson Oklahoma State University High Performance Computing Center Jeff Pummill University of Arkansas High Performance Computing Center October 6, 2010

Questions
- What free national resources are available?
- Who can use them?
- What hardware, software, and training materials do they offer, and how do I find them?
- Do they have advanced user support?
- Where can I find more information?

Resource Providers
- DoD Supercomputing Resource Centers (DSRCs): http://www.hpcmo.hpc.mil/cms2/index.php/hpccenters
- DoE Innovative and Novel Computational Impact on Theory and Experiment (INCITE): http://www.er.doe.gov/ascr/incite/
- NASA Advanced Supercomputing Division (NAS): http://www.nas.nasa.gov
- Open Science Grid (OSG): http://www.opensciencegrid.org/
- TeraGrid: https://www.teragrid.org/

Department of Defense Supercomputing Resource Centers (DSRCs)

DoD DSRCs
- DoD projects requiring HPC are eligible.
- Variety of clusters and shared-memory systems. Consolidated hardware list: http://www.afrl.hpc.mil/consolidated/hardware.php
- Many programming and analysis codes available. Consolidated software list: http://www.afrl.hpc.mil/consolidated/softwareall.php
- User support begins with the Consolidated Customer Assistance Center (CCAC) and can continue with more specialized user assistance.
- More information: http://www.hpcmo.hpc.mil/cms2/index.php/hpccenters

Department of Energy Innovative and Novel Computational Impact on Theory and Experiment (INCITE)

Department of Energy Innovative and Novel Computational Impact on Theory and Experiment (INCITE) Open to all researchers and research organizations (academic, governmental, and industrial) through peer review. Intent is to serve very large-scale, computationally intensive projects. Argonne National Lab and Oak Ridge National Lab host the computational resources. More information: http://hpc.science.doe.gov/allocations/incite/faq.do

Intrepid, an IBM Blue Gene/P at ANL: 163,840 compute cores, #8 on the Nov '09 Top500.

Jaguar, a Cray XT at ORNL: 224,162 compute cores, #1 on the Nov '09 Top500.

NASA Advanced Supercomputing Division (NAS)

NASA Advanced Supercomputing Division (NAS) NASA projects are supported. NAS' current high-end computing environment includes three supercomputers, and a 25-petabyte mass storage system for long-term data storage. NAS has pioneered many technologies and techniques that have become standards for integrating supercomputers into a production environment. http://www.nas.nasa.gov/about/legacy.html

Pleiades, at Ames Research Center: 56,320 compute cores, #6 on the Nov '09 Top500.

Open Science Grid (OSG)

Open Science Grid (OSG)
- OSG brings together computing and storage resources from campuses and research communities into a common, shared grid infrastructure over research networks via a common set of middleware.
- OSG offers participating research communities low-threshold access to more resources than they could afford individually, via a combination of dedicated, scheduled, and opportunistic alternatives.
- OSG is a consortium of software, service, and resource providers and researchers, from universities, national laboratories, and computing centers across the U.S., who together build and operate the OSG project. The project is funded by the NSF and DOE, and provides staff for managing various aspects of the OSG.
- OSG provides training through hands-on workshops and focused engagement with the community, helping new users to run applications on the infrastructure and resource owners to make their compute and storage resources accessible to the grid.

OSG: How it works
- Resource owners register their resources with the OSG.
- Scientific researchers gain access to these resources by registering with one or more Virtual Organizations (VOs). The VO administrators register their VOs with the OSG.
- All members of a VO who have signed the acceptable use policy (AUP) are allowed to access OSG resources, subject to the policies of the resource owners.
- Each resource and each VO is supported by a designated, and in some cases shared, "Support Center (SC)," determined at registration time.
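In practice, the registration model above ends at a job submission through OSG's Condor-based middleware. The submit description below is only a hypothetical sketch of that last step: the executable and file names are invented, and any site- or VO-specific attributes are omitted.

```
# Hypothetical HTCondor submit description for an OSG-style job.
# The executable and data files here are placeholders, not OSG defaults.
universe             = vanilla
executable           = analyze.sh
arguments            = input.dat
transfer_input_files = input.dat
output               = job.out
error                = job.err
log                  = job.log
request_cpus         = 1
queue
```

A VO member would hand this file to condor_submit and follow progress in the log file; real OSG sites may require additional requirements expressions set by the VO.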

TeraGrid

What is TeraGrid? TeraGrid is an open scientific discovery infrastructure combining leadership-class resources at eleven partner sites to create an integrated, persistent computational resource. Using high-performance network connections, the TeraGrid integrates high-performance computers, data resources and tools, and high-end experimental facilities around the country. Currently, TeraGrid resources include more than a petaflop of computing capability and more than 30 petabytes of online and archival data storage, with rapid access and retrieval over high-performance networks. Researchers can also access more than 100 discipline-specific databases. With this combination of resources, the TeraGrid is the world's largest, most comprehensive distributed cyberinfrastructure for open scientific research.

TeraGrid Who is Eligible? To qualify for an allocation, the principal investigator (PI) must be a researcher or educator at a U.S. academic or non-profit research institution. A qualified advisor may apply for an allocation for his or her class, but a high school, undergraduate or graduate student may not be a PI. A postdoctoral researcher can also be a PI. (After receiving an allocation, PIs can request that students be given accounts to use the allocation.) In general, TeraGrid follows the guidelines described in the current NSF Grant Proposal Guide. However, investigators with support from any funding source, not just NSF, are encouraged to apply. If your institution is not a university or a two- or four-year college, special rules may apply. Contact help@teragrid.org or your local Campus Champion for details.

Kraken: 98,928 compute cores, #3 on the Nov '09 Top500.

Ranger: 62,976 compute cores, #9 on the Nov '09 Top500.

and in 2011...

...IBM Blue Waters! With more than 300,000 compute cores, Blue Waters will achieve peak performance of approximately 10 petaflops (10 quadrillion calculations every second) and will deliver sustained performance of at least 1 petaflop on a range of real-world science and engineering applications. Blue Waters will have:
- a peak memory bandwidth of nearly 5 petabytes/second
- more than 1 petabyte of memory
- more than 18 petabytes of disk storage
- more than 500 petabytes of archival storage
The base system and computing environment will be enhanced through a multi-year collaboration among NCSA, the University of Illinois, IBM, and members of the Great Lakes Consortium for Petascale Computation. The enhanced environment will increase the productivity of application developers, system administrators, and researchers by providing an integrated toolset to use Blue Waters and analyze and control its behavior.
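The headline figures above can be sanity-checked with a little arithmetic. The per-core estimate below is purely illustrative: it assumes the peak flops are spread evenly over the quoted core count, which the slide does not state.

```python
# Illustrative arithmetic for the Blue Waters figures quoted above.
# The per-core number is an assumption of this sketch, not a specification.
PETA = 10**15

peak_flops = 10 * PETA      # ~10 petaflops peak
cores = 300_000             # "more than 300,000 compute cores"

# "10 quadrillion calculations every second": 10 * 10^15 = 10^16 flop/s
print(f"peak: {peak_flops:.0e} flop/s")

# Implied per-core peak under the even-spread assumption:
per_core_gflops = peak_flops / cores / 1e9
print(f"~{per_core_gflops:.1f} GF/s per core")
```

The point of the exercise is just that "10 petaflops" and "10 quadrillion per second" are the same number written two ways.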

Advanced Support for TeraGrid Applications (ASTA) Advanced Support for TeraGrid Applications (ASTA) provides collaboration between Advanced User Support (AUS) staff and users of TeraGrid resources. The objective of the program is to enhance the effectiveness and productivity of scientists and engineers. As part of the ASTA program, guided by the allocation process, one or more AUS staff will join the principal investigator's (PI's) team to collaborate for up to a year, working with the users' applications. AUS staff from TeraGrid resource provider sites have expertise and experience in many areas of high performance computing, domain sciences, data visualization and analysis, data management, and grid computing. Collaborative work can include any of the following:
- Porting applications to new resources
- Providing help for portal and gateway development
- Implementing algorithmic enhancements
- Implementing parallel math libraries
- Improving the scalability of codes to higher processor counts
- Optimizing codes to efficiently utilize specific resources
- Assisting with visualization, workflow, and data analysis/transfer
- Developing a Science Gateway
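One common way to reason about the "scalability to higher processor counts" item above is Amdahl's law. The sketch below is a generic illustration with an invented 95% parallel fraction, not a model of any particular TeraGrid application.

```python
# Amdahl's law: a first-order model for how far a code can scale when
# only a fraction p of its runtime parallelizes. The 95% figure is
# illustrative only.

def amdahl_speedup(p: float, n: int) -> float:
    """Speedup on n processors when fraction p of the runtime parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even a 95%-parallel code saturates near 20x, no matter how many cores:
for n in (16, 256, 4096):
    print(f"{n:5d} procs -> {amdahl_speedup(0.95, n):5.1f}x speedup")
```

This is why ASTA-style algorithmic work, which shrinks the serial fraction, can matter more than simply requesting a larger allocation.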

TeraGrid Science Gateways A Science Gateway is an integrated, community-developed set of tools, applications, and data. Access is usually via a portal or a suite of applications, often in a graphical user interface, that is further customized to meet the needs of a targeted community. Most existing gateways are available for use by anyone. Existing science gateways include resources for astronomy, chemistry, earthquake mitigation, geophysics, global atmospheric research, biology and neuroscience, cognitive science, molecular biology, physics, and seismology, among others. Current list: https://www.teragrid.org/web/science-gateways/gateway_list

Questions?