Molecular dynamics simulations in the MolDynGrid Virtual Laboratory by means of ARC between Grid and Cloud

Andrii Salnikov * NorduGrid 2016 * manf@grid.org.ua

MolDynGrid Virtual Laboratory
Established in 2008 for interdisciplinary studies in computational structural biology and bioinformatics, in particular molecular dynamics (MD) simulations of biological macromolecules and their complexes.
Example: tyrosyl-tRNA synthetase (scale worked out below)
- Trajectory length: 20 ns (4 fs step)
- Size: 18 GB
- Computing time: 7 days @ 32 Gflops
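For scale, the slide's numbers imply five million integration steps for a single 20 ns trajectory. A quick sanity check in Python (the GB/ns figure is derived, not from the slide):

```python
# Back-of-the-envelope scale of one MolDynGrid trajectory,
# from the numbers quoted on the slide.
trajectory_ns = 20    # trajectory length, ns
timestep_fs = 4       # integration step, fs
size_gb = 18          # trajectory size on disk, GB

steps = trajectory_ns * 1e6 / timestep_fs   # 1 ns = 1e6 fs
print(f"MD steps: {steps:,.0f}")                               # 5,000,000
print(f"Output density: {size_gb / trajectory_ns:.1f} GB/ns")  # 0.9 GB/ns
```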

MD Simulation Needs
- Resources to compute and store a massive set of trajectories for further analyses
- End users are biologists who don't have experience with HPC and Grid computing

MolDynGrid Current Infrastructure
- Own production ARC CE (UA-IMBG) for computations and development
- The other most powerful CEs in Ukraine for production computing
- 80 TB (own) + 20 TB grid storage
- MD trajectories database
- Job data stage-in and stage-out
- MolDynSub CLI for job submission (see the sketch below)
- Web Portal for job submission and monitoring
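To make the stage-in/stage-out concrete, here is a minimal sketch of a submission through the standard ARC client tools, the kind of call MolDynSub wraps. The CE endpoint, storage URLs, and RTE name are assumptions for illustration:

```python
# Hypothetical sketch: submitting an MD job via the stock ARC client
# (arcsub). The CE stages inputs in from grid storage before the run
# and stages the trajectory out afterwards.
import subprocess
import textwrap

jobdesc = textwrap.dedent("""\
    &(executable="run_md.sh")
     (inputFiles=("topol.tpr" "gsiftp://se.example.org/moldyngrid/topol.tpr"))
     (outputFiles=("traj.xtc" "gsiftp://se.example.org/moldyngrid/traj.xtc"))
     (runTimeEnvironment="APPS/CHEM/GROMACS")
     (jobName="tyrosyl-trna-md")
""")

with open("md.xrsl", "w") as f:
    f.write(jobdesc)

# Submit to a (hypothetical) ARC CE.
subprocess.run(["arcsub", "-c", "arc.example.org", "md.xrsl"], check=True)
```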

MolDynGrid Web Portal
- The first step towards automating MD computations in GROMACS for MolDynGrid objects, back in 2009
- Submission via the old ARC CLI
- Job status pulled from the ARC information system (see the sketch below)
- Monitoring interface
- Trajectories Database with links to computation results
- Available at https://moldyngrid.org
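In this pull model, status polling can be as thin as wrapping the ARC client. A sketch using the current arcstat (the 2009 portal used the older ng* tools; the job ID is hypothetical):

```python
# Sketch of pull-style job monitoring via the ARC client.
import subprocess

def job_status(job_id: str) -> str:
    """Return the raw arcstat output for one grid job."""
    result = subprocess.run(["arcstat", job_id],
                            capture_output=True, text=True, check=True)
    return result.stdout

print(job_status("gsiftp://arc.example.org:2811/jobs/abc123"))
```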

MolDynSub CLI
Version <= 2.1:
- Legacy ARC submission only
- GROMACS MD oriented
- Old portal backend
Version 3.x:
- New modular design
- Different software modules
- Different infrastructure modules (legacy ARC, ARC EMI-ES, Rainbow support)
- Different portal backends:
  - new MolDynGrid portal backend with REST push notifications (sketched below)
  - old backend for backward compatibility
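The slides don't show the notification payload; a minimal sketch of what a push from MolDynSub 3.x to the portal backend could look like (the endpoint and JSON fields are assumptions):

```python
# Hypothetical sketch of a MolDynSub 3.x push notification: POST a
# job-state update to the portal backend instead of having the portal
# poll the ARC information system.
import json
import urllib.request

def push_job_update(job_id: str, state: str) -> None:
    payload = json.dumps({"job_id": job_id, "state": state}).encode()
    req = urllib.request.Request(
        "https://moldyngrid.org/api/jobs",   # assumed endpoint
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)

push_job_update("abc123", "FINISHED")
```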

Different software needs for MD analyses
- Several versions of computational software, including additional force-field builds
- Additional analysis tools
- Quantum chemistry modules
- All versions need to be compiled for each of the heterogeneous resources that run different operating systems

Rainbow ("ARC in the Cloud")
Running virtual machines as grid jobs to dynamically create interactive-access platforms for running software in a pre-created environment.
- Runs on top of ARC
- Stages VM images and data from the Grid (boot sketch below)
[Architecture diagram: ARC CE with a Cloud/VM RTE, Rainbow Nethelper Gateway, Rainbow WN Essentials and VMJob on the worker node; grid network, storage infrastructure, grid users, SEs and CEs]
Rainbow at a Glance: http://goo.gl/ifc6cl
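The slides don't detail the worker-node mechanics; conceptually, once ARC has staged the image as a job input, the boot step could look like the following (a hypothetical QEMU/KVM sketch, not Rainbow's actual implementation):

```python
# Hypothetical worker-node sketch: boot the staged VM image with
# QEMU/KVM and share the job's data directory with the guest.
import subprocess

subprocess.run([
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "4096",                                # guest RAM, MB
    "-drive", "file=worker.qcow2,format=qcow2",  # image staged in by ARC
    "-virtfs", "local,path=./job-data,mount_tag=job,security_model=none",
    "-nographic",
], check=True)
```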

MD Simulations in the MolDynGrid VL: Rainbow JobWrap Mode
Transparent VM usage for computational jobs with minimal job description modifications: specify the VM image location and request that Rainbow JobWrap be invoked (see the sketch below).
[Diagram: physical WN with Rainbow WN components; the job's software and data run inside a VM launched by rainbow-vm-jobwrap]
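Compared with the plain job description shown earlier, the JobWrap delta is small. A sketch under assumed names (the RTE identifier and image URL are not the actual Rainbow interface):

```python
# Hypothetical sketch: the same MD job, extended with two JobWrap-style
# additions: the VM image is staged in like any other input file, and a
# runtime environment request tells the CE to run the job through
# rainbow-vm-jobwrap. Everything else stays unchanged.
import textwrap

jobdesc = textwrap.dedent("""\
    &(executable="run_md.sh")
     (inputFiles=
        ("topol.tpr" "gsiftp://se.example.org/moldyngrid/topol.tpr")
        ("worker.qcow2" "gsiftp://se.example.org/vm/worker.qcow2"))
     (outputFiles=("traj.xtc" "gsiftp://se.example.org/moldyngrid/traj.xtc"))
     (runTimeEnvironment="ENV/RAINBOW/JOBWRAP")
     (jobName="md-in-vm")
""")
print(jobdesc)
```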

MD Simulations in the MolDynGrid VL: BYOWN (Bring Your Own Worker Node)
Docker containers create an own WN environment that runs under Rainbow:
- automated image maintenance framework
- near-zero overhead (see the sketch below)
[Diagram: the ARC CE with Rainbow CE components submits the LRMS job; on the physical WN, the Rainbow WN Docker components mount the data and run the job's software inside the user's own WN Docker image, with images coming from a VO registry]
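The near-zero overhead comes from containers sharing the host kernel instead of booting a full VM. Conceptually, the WN-side step reduces to something like this (a hypothetical sketch; the image name and paths are assumptions, not the actual Rainbow Docker components):

```python
# Hypothetical WN-side sketch: run the grid job's payload inside the
# user-supplied worker-node image, bind-mounting the job's session
# directory so stage-in/stage-out still happens on the host as usual.
import subprocess

session_dir = "/var/spool/arc/job-abc123"   # assumed ARC session directory

subprocess.run([
    "docker", "run", "--rm",
    "-v", f"{session_dir}:/work",            # mount job data into the container
    "-w", "/work",
    "registry.example.org/moldyngrid/wn:latest",  # image from the VO registry
    "./run_md.sh",
], check=True)
```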

MD Simulations in the MolDynGrid VL: EGI GPGPU FedCloud Resources
- MD simulations in GPU-accelerated NAMD
- Rainbow Pilot Mode: VMs with ARC clients inside to fetch data
- Job Factory with a database behind a JSON RESTful API (sketched below)
- Submission via the MolDynSub CLI; management Web-UI
[Diagram: Management Web-UI, Job Factory DB with JSON RESTful API, ARC CE, Gloria Pilot Service, Rainbow VM in the FedCloud, MolDynSub CLI submission]
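The slides name only the components; a minimal sketch of the pilot idea (the endpoint, JSON fields, and flow are assumptions):

```python
# Hypothetical pilot-mode sketch: a Rainbow VM in the FedCloud polls
# the Job Factory's JSON REST API for work, fetches input data with
# the ARC clients installed in the VM, and runs the job.
import json
import subprocess
import urllib.request

API = "https://moldyngrid.org/api/factory"   # assumed endpoint

def fetch_job() -> dict:
    """Ask the Job Factory for the next job (assumed to return
    {"input_url": ..., "command": [...]})."""
    with urllib.request.urlopen(f"{API}/next-job") as resp:
        return json.load(resp)

job = fetch_job()
subprocess.run(["arccp", job["input_url"], "./input/"], check=True)  # ARC data copy
subprocess.run(job["command"], check=True)
```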

Summary
The MolDynGrid infrastructure relies on ARC components to compute jobs in both Grid and Cloud. Using the flexibility of ARC, we have managed to:
- use the existing grid infrastructure as a hardware provider for on-demand virtual machines with interactive access
- automate software maintenance tasks in the heterogeneous grid environment
- stage data to the Cloud from the grid storage network
With these developments on top of ARC and the ARC-powered grid infrastructure, MolDynGrid solves valuable scientific problems.* The methods and tools used are applicable to the needs of other research communities.**
*Savytskyi OV, Yesylevskyy SO, Kornelyuk AI. Asymmetric structure and domain binding interfaces of human tyrosyl-tRNA synthetase studied by molecular dynamics simulations. J Mol Recognit. 2013;26(2):113-20.
**If you are interested in running VMs in grid, contact us: grid@grid.org.ua

Thank you for your kind attention! mailto: manf@grid.org.ua