Fourteen years of Cactus Community

Fourteen years of Cactus Community. Frank Löffler, Center for Computation and Technology, Louisiana State University, Baton Rouge, LA. September 6, 2012

Outline: Motivation scenario from astrophysics; Cactus structure, technically; Cactus structure, socially; future directions.

Challenging Astrophysics Problems: black holes and neutron stars, supernovae, cosmology, gravitational wave data analysis.

Gravitational Wave Physics

Solving Einstein's Equations. Einstein equations: $G_{\mu\nu} = 8\pi T_{\mu\nu}$, giving 12 fully second-order PDE evolution equations, 4 coupled constraint equations, and 4 gauge conditions, plus GR hydrodynamics, MHD, and radiation transport. Fully numerical 3D models are needed.
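
For reference, the counts above follow from the standard 3+1 (ADM-type) split of the field equations, sketched below. This is an illustration only, not necessarily the exact formulation used in the production codes; matter and shift-advection terms in the extrinsic-curvature equation are abbreviated with dots.

```latex
% Sketch of the 3+1 (ADM) split behind the counts quoted above.
\begin{align}
  G_{\mu\nu} &= 8\pi T_{\mu\nu}
    && \text{(covariant field equations)}\\
  \partial_t \gamma_{ij} &= -2\alpha K_{ij} + D_i\beta_j + D_j\beta_i
    && \text{(6 evolution equations)}\\
  \partial_t K_{ij} &= -D_i D_j\alpha
    + \alpha\bigl(R_{ij} + K K_{ij} - 2K_{ik}K^{k}{}_{j}\bigr) + \dots
    && \text{(6 evolution equations)}\\
  \mathcal{H} &\equiv R + K^2 - K_{ij}K^{ij} - 16\pi\rho = 0
    && \text{(1 Hamiltonian constraint)}\\
  \mathcal{M}^i &\equiv D_j\bigl(K^{ij} - \gamma^{ij}K\bigr) - 8\pi S^i = 0
    && \text{(3 momentum constraints)}
\end{align}
```

The lapse $\alpha$ and shift $\beta^i$ account for the 4 gauge conditions.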

Computational Requirements: Unigrid scales to hundreds of thousands of cores. Production runs use 10 levels of mesh refinement with nested grids of size 60x60x60. Current mesh-refinement runs scale up to 10k cores; runtimes range from weeks to a few months.
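
To put these numbers in perspective, here is a back-of-the-envelope estimate; the figure of roughly 100 double-precision variables per grid point is an assumption for illustration, and ghost zones, refinement buffers, and extra time levels are ignored.

```latex
% Illustrative arithmetic only; "100 variables per point" is an assumed figure.
10\ \text{levels} \times 60^3\ \tfrac{\text{points}}{\text{level}}
  \approx 2.2\times10^{6}\ \text{points},
\qquad
2.2\times10^{6}\ \text{points} \times 100\ \tfrac{\text{variables}}{\text{point}}
  \times 8\ \tfrac{\text{B}}{\text{variable}}
  \approx 1.7\ \text{GB}.
```

The raw memory footprint is thus modest; the scaling challenge lies more in the per-step computational cost and in load-balancing the refinement hierarchy across cores.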

Challenge: There are many scientific/engineering components (physics, mathematics, CFD); many numerical algorithm components (finite difference, finite volume, and spectral methods; structured or unstructured meshes and mesh refinement; multipatch and multimodel); and many different computational components (parallelism via MPI, OpenMP, ...; parallel I/O, e.g. checkpointing; visualization). The challenge is defining good abstractions that bring these together in a unified, scalable framework, enabling science.

Cactus Framework Structure

Cactus Core: The Flesh. Written in ANSI C, with Perl and C++; independent of all other components. It provides unified error handling, the build system, parameter parsing/steering, global variable management, and a rule-based scheduler. It also offers extensible APIs for parallel operations, input/output, reduction, interpolation, and timers, with the actual functionality provided by (swappable) components.

Cactus Components: Thorns. Written in C, C++, Fortran 77, or Fortran 90. They typically do not implement grid setup and memory allocation, input/output, interpolation, or reductions. Instead, each thorn encapsulates some functionality: initial data, boundary conditions, evolution systems, equations of state, remote steering (e.g. an HTTPS server).
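
As an illustration of the two slides above, here is a minimal sketch of a thorn routine. The thorn name WaveDemo and the grid function phi are hypothetical; the CCTK_* macros and the cctk_lsh bounds are the flesh-provided services mentioned earlier, with exact details depending on the Cactus version.

```c
/* Minimal sketch: a routine of a hypothetical thorn "WaveDemo" that
 * declares a grid function "phi" in its interface.ccl.  The CCTK_*
 * macros and cctk_lsh bounds come from the flesh. */
#include "cctk.h"
#include "cctk_Arguments.h"
#include "cctk_Parameters.h"

void WaveDemo_InitialData(CCTK_ARGUMENTS)
{
  DECLARE_CCTK_ARGUMENTS;   /* grid variables declared in interface.ccl */
  DECLARE_CCTK_PARAMETERS;  /* parameters declared in param.ccl */

  CCTK_INFO("Setting up initial data");  /* unified info/error handling */

  /* Loop over the processor-local part of the grid; the (swappable)
   * driver component owns the actual domain decomposition. */
  for (int k = 0; k < cctk_lsh[2]; k++)
    for (int j = 0; j < cctk_lsh[1]; j++)
      for (int i = 0; i < cctk_lsh[0]; i++)
      {
        const int idx = CCTK_GFINDEX3D(cctkGH, i, j, k);
        phi[idx] = 0.0;  /* set the hypothetical grid function */
      }
}
```

When such a routine runs is declared in the thorn's schedule.ccl, so the flesh's rule-based scheduler, not the thorn itself, decides where it fits into a simulation.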

Basis Module Overview: a basis for scalable algorithm development. Most used: finite differences on structured meshes. Parallel driver components: simple unigrid, and Carpet (multipatch, mesh refinement). Method of lines (sketched below). Interfaces to external libraries/tools: elliptic solvers (e.g. PETSc, Lorene); input/output via HDF5; visualization with VisIt, OpenDX, Vish; other: PAPI, Hypre, SAGA, Flickr, Twitter.
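
The method of lines mentioned above separates spatial discretization from time integration: discretizing space first leaves a system of ODEs du/dt = rhs(u) that a standard Runge-Kutta integrator advances. The following self-contained sketch (plain C, not the actual Cactus/MoL API; all names and numbers are illustrative) shows the idea for 1-D advection with upwind finite differences and a second-order Runge-Kutta step.

```c
/* Conceptual method-of-lines sketch for 1-D advection u_t + a u_x = 0 on a
 * periodic grid: upwind finite differences in space, Heun (RK2) in time.
 * Illustrative only; this is not the Cactus MoL interface. */
#include <stdio.h>

#define N     100      /* number of grid points */
#define SPEED 1.0      /* advection speed a     */
#define DX    0.01     /* grid spacing          */

/* Spatial discretization: rhs_i = -a (u_i - u_{i-1}) / dx (periodic). */
static void compute_rhs(const double u[N], double rhs[N])
{
  for (int i = 0; i < N; i++)
    rhs[i] = -SPEED * (u[i] - u[(i + N - 1) % N]) / DX;
}

/* One second-order Runge-Kutta (Heun) step of size dt. */
static void mol_step(double u[N], double dt)
{
  double k1[N], k2[N], utmp[N];
  compute_rhs(u, k1);
  for (int i = 0; i < N; i++) utmp[i] = u[i] + dt * k1[i];
  compute_rhs(utmp, k2);
  for (int i = 0; i < N; i++) u[i] += 0.5 * dt * (k1[i] + k2[i]);
}

int main(void)
{
  double u[N] = {0.0};
  u[N / 2] = 1.0;                       /* narrow initial pulse      */
  for (int step = 0; step < 50; step++)
    mol_step(u, 0.005);                 /* dt respects the CFL limit */
  printf("u at the expected pulse position: %g\n", u[N / 2 + 25]);
  return 0;
}
```

The same split is what lets physics thorns supply only right-hand sides while a shared time-integration component provides the Runge-Kutta machinery.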

Convenience Tools: GetComponents, Simfactory, Formaline.

Cactus as a growing project

Social structure: a few core members (Gabrielle Allen, Steven R. Brandt, Frank Löffler, Erik Schnetter, ...), about 50 developers worldwide, and many more users. The Cactus community: open source, yearly releases, mailing lists, issue tracker, IRC support channel, tutorials, web presence, HPC allocations, repositories, ...

Guiding Principles: open, community-driven software development; separation of physics software and computational infrastructure; stable interfaces, allowing extensions; simplify usage where possible: doing science >> running a simulation. Students need to know a lot about physics (meaningful initial conditions, numerical stability, accuracy/resolution), need patience and curiosity, and must develop a gut feeling for what is right. The Cactus Toolkit cannot give them that; however, open codes that are easy to use allow them to concentrate on these things!

Credits, Citations: In academia it is citations, citations, citations! Cactus is open and free source, with no requirement to cite anything; however, users are requested to cite a few publications. Which publications? A few for the Cactus framework itself, and some components list a few as well. The list is published on the website and managed through a publication database.

Future, from certain to more speculative: multiblock techniques; GPU support; requirement-based scheduling; discontinuous Galerkin methods instead of finite differences; support for unstructured grids; completely requirement-based programming (MPI ParalleX?).

Tools: GetComponents. Task: collect software from various repositories at different sites. Example simulation assembly: Cactus Flesh and Toolkit (svn.cactuscode.org); core Einstein Toolkit (svn.einsteintoolkit.org); Carpet AMR (carpetcode.org, hg); tools, parameter files, and data (svn.einsteintoolkit.org); group modules (x.groupthorns.org); individual modules (x.mythorns.org), where x is one of cvs, svn, darcs, git, hg, http.

Tools: Simulation Factory (http://www.simfactory.org/). Task: provide support for common, repetitive steps: access remote systems and synchronize source code trees; configure and build on different systems semi-automatically; provide a maintained list of supercomputer configurations; manage simulations (following best practices and avoiding human errors).

Tools: Formaline. Task: ensure that simulations are and remain repeatable, and remember exactly how they were performed. It takes snapshots of the source code and system configuration and stores them in the executable and/or a git repository, and tags all output files.