Longhorn Project: TACC's XD Visualization Resource

Longhorn Project: TACC's XD Visualization Resource. DOE Computer Graphics Forum, April 14, 2010

Longhorn Visualization and Data Analysis. In November 2008, NSF accepted proposals for the eXtreme Digital (XD) Resources for Science and Engineering program. The Longhorn project was proposed as a next-generation response to TeraGrid's growing visualization and data analysis needs.

Spur - Visualization System: 128 cores, 1 TB distributed memory, 32 GPUs.
- spur.tacc.utexas.edu: login node, no GPUs (don't run apps here!)
- ivisbig.ranger: Sun Fire X4600 server, 8 AMD Opteron dual-core CPUs @ 3 GHz, 256 GB memory, 4 NVIDIA FX5600 GPUs
- ivis[1-7].ranger: Sun Fire X4440 servers, 4 AMD Opteron quad-core CPUs @ 2.3 GHz, 128 GB memory, 4 NVIDIA FX5600 GPUs each

Spur / Ranger topology (diagram): the login nodes (login3.ranger, login4.ranger), the vis queue feeding Spur's vis nodes (ivis[1-7], ivisbig), the normal, development, and other queues feeding the Ranger HPC compute nodes, and the $HOME, $WORK, and $SCRATCH file systems.

Connecting to Spur:
1. From your laptop or workstation: ssh <user>@spur.tacc.utexas.edu
2. On spur: qsub /share/sge/default/pe_scripts/job.vnc; then touch ~/vncserver.out and tail -f ~/vncserver.out (the file contains the VNC port info after the job launches)
3. From your laptop or workstation: ssh -L <port>:spur.tacc.utexas.edu:<port> <user>@spur.tacc.utexas.edu (establishes a secure tunnel to the VNC port on Spur)
4. From your laptop or workstation: vncviewer localhost::<port> (the localhost connection is forwarded to Spur via the SSH tunnel; Spur automatically forwards the port to the VNC server on the vis node, ivis[1-7] or ivisbig)
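The client-side steps above can be wrapped in a small script once the VNC job is running. The following is a rough sketch, not an official TACC tool: the user name and the port number (read from ~/vncserver.out on Spur) are placeholders you would substitute for your own session.

    #!/bin/bash
    # Sketch: tunnel a local port to Spur and attach a VNC viewer.
    # USER_NAME and VNC_PORT are placeholders for your login and the port
    # reported in ~/vncserver.out after the job.vnc job starts.
    USER_NAME=username
    VNC_PORT=5901

    # Open the SSH tunnel in the background (-f) without running a remote command (-N);
    # Spur forwards the port on to the VNC server on the assigned vis node.
    ssh -f -N -L ${VNC_PORT}:spur.tacc.utexas.edu:${VNC_PORT} ${USER_NAME}@spur.tacc.utexas.edu

    # Connect the viewer through the tunnel.
    vncviewer localhost::${VNC_PORT}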

Spur Usage

XD Vis Requirements Analysis. Surveyed members of the science community via personal interviews and email surveys; received ~60 individual responses.

XD Vis Requirements Analysis (DAV = Data Analysis/Visualization)

Requirement                                   % of Users Requesting
User Support and Consulting                   96%
Large-Scale DAV Tools/Resources               39%
Remote/Collaborative DAV Services             27%
Computational Steering                        10%
In-Simulation DAV Tools                        6%
Tools for 3D Measurement and Query             6%
Tools for Multiple Length and Time Scales      6%

Longhorn Configuration (256 nodes, 2048 cores, 512 GPUs, ~14.5 TB aggregate memory)
256 Dell quad-core Intel Nehalem nodes:
- 240 nodes: dual socket, quad core per socket (8 cores/node); 48 GB shared memory/node (6 GB/core); 73 GB local disk; 2 NVIDIA FX 5800 GPUs/node (4 GB RAM each)
- 16 nodes: dual socket, quad core per socket (8 cores/node); 144 GB shared memory/node (18 GB/core); 73 GB local disk; 2 NVIDIA FX 5800 GPUs/node (4 GB RAM each)
- ~14.5 TB aggregate memory
- QDR InfiniBand interconnect
- Direct connection to Ranger's Lustre parallel file system
- 10G connection to a 210 TB local Lustre parallel file system
- Jobs launched through SGE (see the example submission script below)
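Since Longhorn jobs are launched through SGE, a batch submission looks roughly like the sketch below. This is a minimal illustration, not an official Longhorn script: the parallel environment name, slot count, job name, and executable are assumptions, while the queue, project flag, and 4-hour visualization runtime mirror the queue structure described later in this talk.

    #!/bin/bash
    # Minimal SGE submission sketch for a Longhorn-style visualization job.
    # Directive values marked as assumptions should be checked against the user guide.
    #$ -N vis_job                # job name (assumption)
    #$ -q normal                 # queue, as in "qsub -q normal -P vis"
    #$ -P vis                    # project flag selecting the visualization priority class
    #$ -pe 8way 16               # parallel environment and slot count (assumptions)
    #$ -l h_rt=04:00:00          # wall-clock request; vis jobs run with a 4-hour limit
    #$ -cwd                      # run in the submission directory

    ./my_vis_app                 # placeholder executable

Saved as, say, job.sge, this would be submitted with qsub job.sge.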

Longhorn's Lustre File System ($SCRATCH)
- OSSs on Longhorn are built on Dell Nehalem servers connected to MD1000 storage vaults
- 15 drives total, configured into 2 RAID5 pairs with a wandering spare
- Peak throughput speed of the file system: 5.86 GB/s
- Peak aggregate speed of the file system: 5.43 GB/s
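Because $SCRATCH is striped across multiple OSSs, large sequential I/O benefits from spreading files over several storage targets. The commands below are a generic Lustre sketch, assuming the standard lfs client tools; the directory name and stripe count are illustrative, not Longhorn-specific settings.

    # Inspect the default striping on the scratch file system.
    lfs getstripe $SCRATCH

    # Illustrative: create an output directory striped across 4 OSTs so that
    # large writes are spread over multiple storage servers.
    mkdir -p $SCRATCH/vis_output
    lfs setstripe -c 4 $SCRATCH/vis_output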

Longhorn Partners and Roles:
- TACC (Kelly Gaither, PI): Longhorn machine deployment; user support; visualization and data analysis portal development; software/tool development
- NCAR (John Clyne, Co-PI): user support; VAPOR enhancements
- University of Utah (Valerio Pascucci, Co-PI; Chuck Hansen): user support; software integration of RTRT and topological analysis

Longhorn Partners and Roles (continued):
- Purdue University (David Ebert, Co-PI): user support; integration of visual analytics software
- UC Davis (Hank Childs, Chief Software Integration Architect): directly facilitates tools being integrated into the VisIt software suite
- SURA (Linda Akli): MSI outreach / broadening participation

Longhorn Usage Modalities:
- Remote/interactive visualization: highest-priority jobs; remote/interactive capabilities facilitated through VNC; 4-hour time limit
- GPGPU jobs: run at lower priority than the remote/interactive jobs; 12-hour time limit
- CPU jobs with higher memory requirements: run at the lowest priority, when neither remote/interactive nor GPGPU jobs are waiting in the queue; 12-hour time limit

Longhorn User Portal

Longhorn Queue Structure: qsub -q normal -P vis
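To make the line above concrete, a submission and its follow-up checks might look like the snippet below. The script name is a placeholder (the job script sketched earlier), and qstat/qdel are standard SGE commands rather than anything Longhorn-specific.

    # Submit a job script to the normal queue under the vis project.
    qsub -q normal -P vis job.sge

    # Standard SGE monitoring:
    qstat -u $USER       # list your pending and running jobs
    qdel <job_id>        # cancel a job if needed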

Longhorn Usage (usage charts)

Longhorn Usage by Field of Science: since production, last 30 days, and last 7 days (charts)

Sampling of Current Projects:
- Computational Study of Earth and Planetary Materials
- Simulation of Quantum Systems
- Visualization and Analysis of Turbulent Flow
- A Probabilistic Molecular Dynamics Optimized for the GPU
- Visualization of Nano-Microscopy
- MURI on Biologically-Inspired Autonomous Sea Vehicles: Towards a Mission Configurable Stealth Underwater Batoid
- Adaptive Multiscale Simulations

Visualizing and Analyzing Large-Scale Turbulent Flow. Detect, track, classify, and visualize features in large-scale turbulent flow. Collaborative effort between TACC, VACET, and domain scientists; paper submitted to IEEE Vis 2010. VisIt calculated connected components on a 4K^3 turbulence data set in parallel using TACC's Longhorn machine: 2 million components were initially identified, and the map expression was then used to select only the components with total volume greater than 15. Thanks to Cyrus Harrison for developing the connected components code in VisIt, to Hank Childs for developing the chord length distribution code and visualization, and to Wes Bethel for facilitating the collaboration. Data courtesy of P.K. Yeung and Diego Donzis.

Questions? Kelly Gaither kelly@tacc.utexas.edu