Overview of the Texas Advanced Computing Center
Bill Barth, TACC
September 12, 2011

TACC Mission & Strategic Approach

To enable discoveries that advance science and society through the application of advanced computing technologies.

Resources & Services
- Evaluate, acquire & operate world-class resources
- Provide expert support via leading technology expertise

Research & Development
- Produce new computational technologies and techniques
- Collaborate with researchers to apply advanced computing technologies in science projects

TACC Technology Focus Areas

High Performance Computing (HPC)
- Performance benchmarking, analysis, optimization
- Linear algebra and solvers
- CFD, computational chemistry, weather/ocean modeling, computational biomedicine

Data & Information Analysis (DIA)
- Scientific visualization
- Scientific data collections management
- Data analysis & mining

Advanced Computing Interfaces
- Portals & gateways
- Middleware for job scheduling, workflow, orchestration

TACC HPC/Data Systems

                        Ranger                Lonestar*             Longhorn
Purpose                 HPC                   HPC                   Data analysis
Nodes                   3,936                 1,888                 256
CPUs/node x cores/CPU   4 x 4                 2 x 6                 2 x 4 + 2 GPUs
Total cores             62,976                22,656                2,048
CPUs                    AMD Barcelona         Intel Westmere        Intel Nehalem 2.5 GHz
                        2.3 GHz               3.3 GHz               + NVIDIA Quadro Plex S4s
Memory                  2 GB/core             2 GB/core             6 GB/core (240 nodes),
                                                                    18 GB/core (16 nodes)
Interconnect            SDR InfiniBand        QDR InfiniBand        QDR InfiniBand
Disk                    1.7 PB Lustre (IB)    1 PB Lustre (IB)      0.2 PB Lustre (10 GigE)

* Replacement of the present Lonestar in Jan. 2011
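The total-core counts follow directly from nodes x CPUs/node x cores/CPU; a quick, purely illustrative check in Python, with the figures copied from the table above:

    # Check the "Total cores" column: nodes x CPUs/node x cores/CPU.
    # Figures are taken directly from the table above.
    systems = {
        "Ranger":   (3936, 4, 4),
        "Lonestar": (1888, 2, 6),
        "Longhorn": (256,  2, 4),   # CPU cores only; excludes the 2 GPUs per node
    }
    for name, (nodes, cpus_per_node, cores_per_cpu) in systems.items():
        print(f"{name}: {nodes * cpus_per_node * cores_per_cpu:,} cores")
    # Ranger: 62,976 cores / Lonestar: 22,656 cores / Longhorn: 2,048 cores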

Storage Systems

High-speed disk -- Corral
- 1 PB DataDirect disk: 800 TB Lustre file system + 200 TB for data collections
- InfiniBand interconnect
- Access: mounted as /corral on Ranger, Lonestar, and Longhorn; ssh/scp; requires an allocation
- Hardware: DDN S2A 9900 disk

Tape storage -- Ranch
- 10 PB capacity, 70 TB disk cache
- 10 Gb Ethernet interconnect
- Access: scp/bbcp to ranch.tacc.utexas.edu; or rsh/ssh
- Hardware: STK SL8500 tape library
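Since Ranch is reached over scp/bbcp, archiving can be scripted. A minimal sketch in Python, assuming a hypothetical username and file paths (your actual account and paths will differ):

    # Archive a file to Ranch over scp, as described above.
    # "username" and both paths are placeholders.
    import subprocess

    local_file = "results/run_output.tar"                       # hypothetical local file
    remote_dest = "username@ranch.tacc.utexas.edu:archive/"     # hypothetical account and path

    subprocess.run(["scp", local_file, remote_dest], check=True)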

TACC Advanced Visualization Systems

Spur: Sun remote visualization system
- 8 servers, 32 NVIDIA QuadroPlex GPUs
- 1.125 TB total memory, 256 GB in one server
- On the Ranger InfiniBand fabric
- Direct access to Ranger file systems

New ACES Vislab
- 15x5 tiled display wall, 307 Mpixels, NVIDIA GPUs
- Sony 9 Mpixel projector, 20 ft x 11 ft display
- 4 Dell high-end workstations
- Collaboration/conference room

TACC Support Services
- Technical documentation: http://www.tacc.utexas.edu/ (user guides!)
- Training: http://www.tacc.utexas.edu/services/training/ -- taught on-site; sign up at the TACC User Portal
- Consulting and everything else: through the TACC User Portal, http://portal.tacc.utexas.edu/

XSEDE: eXtreme Digital Resources for Science and Engineering
A national federation of NSF-funded advanced computing resource and service providers.
Portal: http://portal.xsede.org -- information, allocations, access, help

Using TACC XSEDE Resources
- 11 centers
- 1.5 billion core-hrs/yr
- Startup, Research & Instructional allocations

XSEDE Allocation Requests: Types of Projects

Startup -- development/testing/porting/benchmarking
  Up to 200,000 core-hrs for 1 yr; submit an abstract, awarded within ~2 weeks

Research -- research program (usually funded)
  Unlimited core-hrs for 1 yr; 10-page request, awarded quarterly

Education -- classroom, training
  Up to 200,000 core-hrs for 1 yr; submit an abstract, awarded within ~2 weeks

https://portal.xsede.org/allocations-overview

Globus Online Training Webcast
September 15, 2011, 1 PM CDT
https://www.xsede.org/web/xup/coursecalendar
www.globusonline.org

What is Globus Online?
- Initial implementation of XSEDE User Access Services (XUAS)
- Reliable data movement service
  - High performance: move terabytes of data in thousands of files
  - Automatic fault recovery
  - Across multiple security domains
- Designed for researchers
  - Easy "fire and forget" file transfers
  - No client software installation
  - New features automatically available
  - Consolidated support and troubleshooting
- Works with existing GridFTP servers
- Ability to move files to any machine (even your laptop) with ease

"We have been using Globus Online to move files to a TeraGrid cluster where we analyze and store tens of terabytes of data... I plan to continue using GO to access these resources within XSEDE to easily get my files where they need to go." -- University of Washington user

"The service is reliable and easy to use, and I look forward to continuing to use it with XSEDE. I've also used the Globus Connect feature to move files from TeraGrid sites to other machines -- this is a very useful feature which I'm sure XSEDE users will want to take advantage of." -- NCSA user

www.globusonline.org
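For illustration only: the 2011-era Globus Online service was driven mainly through its web interface, but the same "fire and forget" transfer model can be sketched with the present-day globus-sdk Python package (which post-dates this presentation). The endpoint IDs and token below are placeholders, not real values.

    # Illustrative sketch using the modern globus-sdk package, not the 2011
    # Globus Online interface described above. All IDs and the token are placeholders.
    import globus_sdk

    TRANSFER_TOKEN = "..."                  # placeholder OAuth2 transfer token
    SRC_ENDPOINT = "source-endpoint-uuid"   # placeholder, e.g. a GridFTP endpoint
    DST_ENDPOINT = "dest-endpoint-uuid"     # placeholder, e.g. a laptop running Globus Connect

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
    )

    # Build and submit the transfer task; the service handles retries and fault recovery.
    task = globus_sdk.TransferData(tc, SRC_ENDPOINT, DST_ENDPOINT,
                                   label="example transfer", sync_level="checksum")
    task.add_item("/scratch/dataset.tar", "/home/me/dataset.tar")
    result = tc.submit_transfer(task)
    print("Submitted task:", result["task_id"])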

More About TACC
Texas Advanced Computing Center
www.tacc.utexas.edu
info@tacc.utexas.edu
(512) 475-9411