Hands-On Workshop bwunicluster June 29th 2015


1  Hands-On Workshop bwunicluster, June 29th 2015 - Agenda
  - Welcome
  - Introduction to bwhpc and the bwunicluster
  - Modules - Software Environment Management
  - Job Submission and Monitoring
  - Interactive Work and Remote Visualisation
  - Questions and Answers, Open Discussion
  - End

2  High performance computing in Baden-Württemberg
   An introduction to bwhpc and the bwunicluster
   Jürgen Salk (bwhpc-c5)

3  1. bwhpc concept

4  bwhpc: Where do we come from?
  - bwhpc is the successor of bwgrid
  - bwgrid: clusters located at 9 universities in BW
  - Homogeneous resources, common hardware
  - Feel at home on all 9 bwgrid sites
  - One-size-fits-all approach
    ("One-size-fits-all": describes a piece of clothing that is designed to fit a person of any size. Source:)


8  bwhpc
  - Strategy for high performance computing in BW from 2013 to 2018, in particular for Tier 3
  - Provision of computing systems tailored to the needs of specific scientific communities:
    economics & social science, general sciences supply, molecular life science, bioinformatics,
    neurosciences, astrophysics, micro systems engineering, elementary particle physics,
    computational chemistry
  - Sites: Mannheim, Heidelberg, Karlsruhe (bwunicluster), Tübingen, Ulm (JUSTUS), Freiburg

9  2. Introduction to the bwunicluster

10  bwunicluster
  - Physically located at KIT in Karlsruhe
  - Co-financed by Baden-Württemberg's ministry of science, research and arts and the
    shareholder universities: Stuttgart, Freiburg, Ulm, Hohenheim, Konstanz, Heidelberg,
    Tübingen, Mannheim, KIT
  - Usage: free of charge; general purpose, teaching; technical computing (sequential &
    weakly parallel) & parallel computing
  - Access / limitations:
    - Open to all members of a shareholder university, but users need to be entitled by
      their home university
    - Registration at
    - Participate in the questionnaire at
    - Filesystem quota and computation share are based on the user's own university's share

11  bwunicluster hardware architecture
  - 2 login nodes: directly accessible by end users for interactive login, file management,
    program development and interactive pre- and postprocessing
  - 520 compute nodes:
    - 512 thin nodes: 16-way (2x8) Intel Xeon E5-2670, clock speed 2.6 GHz, 64 GB RAM,
      2 TB local disk space
    - 8 fat nodes: 32-way (4x8) Intel Xeon E5-4640, clock speed 2.4 GHz, 1 TB RAM,
      7 TB local disk space
  - Fast interconnect: InfiniBand 4x FDR (4 x 14 Gbit/s)
  - Access is managed by a batch system: jobs are submitted via MOAB and executed, depending
    on their priority, when the required resources are available (see the job-script sketch below)
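
Job submission and monitoring are covered in detail later in this workshop. As a rough orientation, a minimal MOAB job script might look like the following sketch; the resource values, directory and program names are purely illustrative, and the exact set of supported msub options should be checked against the bwunicluster documentation:

    #!/bin/bash
    #MSUB -N example_job                 # job name (illustrative)
    #MSUB -l nodes=1:ppn=16              # e.g. one thin node with all 16 cores
    #MSUB -l walltime=00:30:00           # requested wall clock time
    cd $WORK/example_run                 # hypothetical working directory on the work file system
    ./example_program > example.out      # hypothetical program and output file

The script would then be submitted and monitored roughly like this:

    $ msub jobscript.sh
    $ showq -u <UserID>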

12  bwunicluster hardware architecture (storage)
  - Local disks: 2 TB per thin node, 7 TB per fat node
  - Global shared storage provided by the parallel file system Lustre:
    $HOME (469 TB) and $WORK / workspaces (938 TB)

13  bwunicluster HOME file system
  - Any user is automatically placed into $HOME upon login
  - Environment variable: $HOME (e.g. /home/ul/ul_theophys/ul_<username>)
  - Intended only for important permanent user files, e.g. program source code, final result
    files, personal configuration files
  - Daily backups
  - Group quotas for disk space and number of files (no quotas for individual users)
  - How to check quota and disk usage: $ cat $HOME/../diskusage
  - For users from Ulm the group quota is regularly adjusted to reflect the group size
  - Aggregated read/write performance is low (~8 GB/s)
  - DO NOT COMPUTE IN $HOME!

14  bwunicluster work file systems
  - Aggregated read/write performance is much better than for $HOME (~16 GB/s)
  - Intended for parallel access (shared across multiple nodes) and for high throughput to
    large files, e.g. temporary job files, intermediate result files (checkpoint files)
  - No backups!!! Limited lifetime of files!!!
  - 2 different concepts to access the work file system:
    (a) via the $WORK environment variable
    (b) via workspace tools

15  bwunicluster work file systems (a) $WORK
  - Automatically created for any user upon first login
  - Environment variable: $WORK (e.g. /work/ul/ul_theophys/ul_<username>)
  - Change to it: $ cd $WORK
  - Limited lifetime: any file in $WORK not accessed for more than 28 days will be
    automatically deleted; the maximum lifetime of a file is 280 days
  - Files no longer needed should be removed by the user
  - Group quotas for disk space and number of files may be introduced if required
  - How to check quota and disk usage: $ cat $WORK/../diskusage

16  bwunicluster work file systems (b) Workspace tools (highly recommended)
  - Advantage: provides more control over lifetime and location of files
  - Create a workspace folder named Simulation with a lifetime of 30 days (max. 60 days) from now:
    $ ws_allocate Simulation 30
  - List your workspaces with location, creation date and remaining lifetime:
    $ ws_list
  - Extend the lifetime of an existing workspace (up to 3 times):
    $ ws_extend Simulation 60
  - Find the location of a workspace folder by its name:
    $ ws_find Simulation
  - Release (delete!) a workspace (remember: there is no backup):
    $ ws_release Simulation
  - Example usage:
    $ ws_allocate Simulation 30
    $ SIMWS=`ws_find Simulation`
    $ ln -s $SIMWS $HOME
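
The workspace tools can also be used from within a batch job. A minimal sketch, assuming a workspace named Simulation has already been allocated as shown above; the resource request, file and program names are illustrative:

    #!/bin/bash
    #MSUB -l nodes=1:ppn=16              # illustrative resource request
    #MSUB -l walltime=02:00:00
    SIMWS=`ws_find Simulation`           # locate the previously allocated workspace by name
    cd $SIMWS                            # run inside the workspace on the work file system
    cp $HOME/inputfile .                 # hypothetical input file staged from $HOME
    ./example_program < inputfile > outfile
    cp outfile $HOME                     # keep the important result on the backed-up $HOME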

17  bwunicluster local file systems
  - Higher aggregated read/write performance than the global file systems
  - A temporary subdirectory is automatically created for every individual job on the compute node
  - Environment variable: $TMP (e.g. /scratch/slurm_tmpdir/job_<jobnumber>)
  - Intended for single-node jobs with massive IO demands
  - Data stored in $TMP will be deleted at the end of the job
  - Copy important results to $HOME, $WORK or an allocated workspace at the end of the job
  - No backup!!!
  - Example usage (somewhat simplified):
    cp $HOME/inputfile $TMP
    cd $TMP
    program <inputfile >outfile
    cp outfile $HOME
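
Wrapped into a complete batch job, the $TMP staging pattern from the example above might look like the following sketch (job name, resource request and file names are illustrative):

    #!/bin/bash
    #MSUB -N tmp_example                 # job name (illustrative)
    #MSUB -l nodes=1:ppn=16              # single-node job: $TMP is local to the compute node
    #MSUB -l walltime=01:00:00
    cp $HOME/inputfile $TMP              # stage the input onto the fast local disk
    cd $TMP
    program < inputfile > outfile        # run with all heavy IO on $TMP
    cp outfile $HOME                     # save the result before $TMP is deleted at job end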

18  bwunicluster file systems at a glance

    Property                 | $TMP               | $HOME     | $WORK / workspace
    -------------------------|--------------------|-----------|------------------
    Visibility               | local              | global    | global
    Lifetime                 | batch job runtime  | permanent | max. 240 days
    Disk space               | 2 TB (thin nodes), | 469 TB    | 938 TB
                             | 7 TB (fat nodes)   |           |
    Quotas                   | no                 | yes       | if required
    Backup                   | no                 | yes       | no
    Aggr. read/write perf.   | very high          | low       | high

19  Documentation and Support
  - Website:
  - General info: in English and German
  - Best-practices guide (documentation on the clusters): in English
  - User support:
    - Send e-mail to:
    - Ticket system:

20  Thank you for your attention! Questions?

21  3. Get ready to start

22  Prerequisites
  - Register at the bwunicluster and/or check your registration status in a web browser:
    What's your localuid?
  - Optionally set a reasonably strong password for the bwunicluster
  - Check your status and/or participate in the questionnaire in the web browser at
  - On your local desktop open a terminal window in KDE:
    press <ALT>+<F2>, type konsole, press <Enter>
  - Log into the bwunicluster: at the local desktop's terminal command prompt type:
    $ ssh -X <UserID>@bwunicluster.scc.kit.edu
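
For copying data between the local desktop and the cluster, the usual OpenSSH tools work against the same login address; a hedged example with illustrative file names:

    $ scp results.tar.gz <UserID>@bwunicluster.scc.kit.edu:     # copy a local file into your $HOME on the cluster
    $ scp <UserID>@bwunicluster.scc.kit.edu:outfile .           # copy a file from the cluster back to the desktop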
