HPC system startup manual (version 1.20)

Document change log

  Issue  Date        Change
  1      12/1/2012   New document
  2      10/22/2013  Added information on supported OS
  3      10/22/2013  Changed Example 1 for data download
  4      3/4/2014    Added the chapter on the Torque/Maui batch job system
  5      3/4/2014    Changed text font

1 Overview

The high performance computing (HPC) system at the SACLA facility consists of 80 compute nodes, each with two 6-core processors, for a total of 960 cores. The theoretical peak compute performance is 13 TFLOPS. The system includes a 170 TB shared storage system managed by the Lustre file system. Nodes are interconnected with InfiniBand at a bandwidth of 40 Gbps. This manual covers startup, from account creation to launching the software installed on the SACLA HPC system.

2 Account creation

After your experiment proposal is accepted, a responsible researcher at SACLA is assigned to your proposal. Contact this researcher by e-mail to request use of the SACLA HPC system; they will create your account and give you a username and password. Note that the /home directory is limited by quota to 100 GB per user. If you need more storage for large amounts of experimental data or analysis results, the /work directory can be used. Please consult your contact researcher about using the /work directory.

3 How to log in to the SACLA HPC system

Access to the SACLA HPC system is restricted to the SPring-8 local network. First, connect your computer to the SPring-8 local network using the VPN service. After that, you can log in to the HPC server over the SSH protocol.

[ step 1. Connect to the SPring-8 network with the VPN service. ]
- Access the web page https://hpc.spring8.or.jp with a web browser.
- Log in to the web page of the VPN installation service. (Enter the username and password given in Sec. 2.)
- Download and install the VPN connection software on your computer.

The supported OS for the VPN service are as follows.
[Windows] Windows 7 Professional (64-bit), Windows 7 Home Premium (64-bit), Windows Vista Home Premium (32-bit)
[Mac] Mac OS X Lion (10.7)
[Linux] Debian GNU/Linux 6.0 (x86_64), Red Hat Enterprise Linux 6.2 (x86_64)

[ step 2. Log in to the SACLA HPC system. ]
- Type ssh -X [username]@xhpcfep. The username is the one given in Sec. 2.
- Enter your password given in Sec. 2.
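As a convenience (not part of the official procedure), the login settings can be stored in your SSH client configuration so that a short alias suffices. A hypothetical ~/.ssh/config entry, assuming an OpenSSH client on your local computer:

  # ~/.ssh/config on your local computer (hypothetical alias)
  Host sacla
      HostName xhpcfep
      User [username]     # the username given in Sec. 2
      ForwardX11 yes      # equivalent to the -X option

With this entry, ssh sacla is equivalent to ssh -X [username]@xhpcfep. Note that xhpcfep is reachable only while the VPN connection from step 1 is active.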

4 Torque/Maui batch job system

The queuing system on the SACLA HPC system is Torque/Maui. You need three main commands to use this system: showq, qsub, and qdel.

[showq] showq displays information about active, eligible, and blocked jobs.
[qsub] qsub submits a job. In its simplest form you run qsub myscript, where myscript is a shell script that runs your job; a minimal example script is sketched below. If you don't want to write a script, you can run an interactive job with the -I option. You can also add the -X option to enable X11 forwarding (e.g. qsub -I -X -q smp).
[qdel] qdel job_number deletes your job. You can get the job_number with showq.
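The following job script is a minimal sketch only; the job name, resource request, walltime, queue name, and program name are placeholders to adapt to your own job (only the smp queue is mentioned in this manual).

  #!/bin/bash
  #PBS -N my_analysis         # job name (placeholder)
  #PBS -q smp                 # queue name; adapt to the queue you were assigned
  #PBS -l nodes=1:ppn=12      # one node, 12 cores (each node has two 6-core processors)
  #PBS -l walltime=01:00:00   # requested run time (placeholder)
  #PBS -j oe                  # merge stdout and stderr into one output file

  cd $PBS_O_WORKDIR           # start in the directory where qsub was invoked
  ./my_program input.dat      # placeholder for your actual command

Save it as myscript, submit it with qsub myscript, watch it with showq, and remove it with qdel job_number if necessary.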

5 Data handling in the SACLA HPC system

[ Ex. 1 Download your experimental data to your storage space in the HPC. ]
This example downloads an HDF5 file with the data of run number 28930, in the tag range from 74121900 to 74121960, to the path /home/[username]/data/test/sample.h5.
- Type mkdir -p ~/data/test in the terminal window.
- Type ssh [your_account]@xfer1 DataConvert3 -dir ~/data/test -o sample.h5 -each -mode 20 -r 28930 -starttag 74121900 -endtag 74121960 in the terminal window.

[ Ex. 2 Open your experimental data. ]
This example shows how to open the HDF5 file sample.h5 generated in Ex. 1 with the HDFView software. After the procedure listed below, the 2-dimensional data is displayed as an image in the HDFView GUI.
- Type hdfview in the terminal window.
- Open the file-open dialog by clicking File -> Open in the HDFView menu bar.
- Specify the filename /home/[username]/data/test/sample.h5 in the file-open dialog.
- Expand the HDF5 tree shown in the HDFView left sidebar by clicking run_[run number] -> detector_2d_[detector ID] -> tag_[tag number].
- Highlight and right-click the detector_data data object and select Open as in the context menu.
- Select Display As Image and click OK in the dataset selection dialog.

[ Ex. 3 Data transfer to your local computer. ]
This example transfers your data from the SACLA HPC system to your local computer over the SSH protocol.
- Type scp [username]@xhpcfep:/home/[username]/data/test/sample.h5 [favorite path] in your local terminal window.
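If the HDF5 command-line tools are installed on the system (an assumption; they ship with most HDF5 distributions), the file from Ex. 1 can also be inspected without starting the HDFView GUI of Ex. 2. A minimal sketch:

  # list the object tree (runs, detectors, tags) of the file
  h5dump -n ~/data/test/sample.h5

  # print only the header (shape and datatype) of one 2D dataset;
  # this dataset path is hypothetical, following the tree layout of Ex. 2
  h5dump -H -d /run_28930/detector_2d_1/tag_74121900/detector_data ~/data/test/sample.h5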

[ Ex. 4 Browse the SACLA Database. ]
The beamline information is recorded in a MySQL server. You can browse and plot the recorded information using a web service on the SACLA HPC network. The procedure below displays the trend graph of a signal on the web page.
- Type firefox in the terminal window.
- Access the web page http://xqaccdaq01.daq.xfel.cntl.local/syncdaq/xdaq.html.
- Register a signal (in this example, beam monitor 1) in the top-left sub-window by clicking the links xq-bl3-fetc -> xfel_bl_3_tc_bm_1_pd/charge in the bottom-left sub-window.
- Assign the registered signal xfel_bl_3_tc_bm_1_pd/charge to the left Y-axis by highlighting the signal name in the Left Axis list box in the top-left sub-window.
- Select Time as the X-axis in the top-left sub-window.
- Set BeginTime and EndTime to the example time period 2012-05-15 23:29:29 to 2012-05-15 23:29:50 in the top-left sub-window.
- Click the plot button.

[ Ex. 5 Export data of the SACLA Database to a text file. ]
You can export the beamline information registered in the MySQL server to a text file using the syncdaq_get command.
- Type syncdaq_get -l200 -b"2012/05/15 23:29:29" -e"2012/05/15 23:29:50" xfel_bl_3_tc_bm_1_pd/charge > bm_data.txt in the terminal window.
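To export several signals over the same time window, the Ex. 5 command can be wrapped in a small shell loop. A sketch under one assumption: only xfel_bl_3_tc_bm_1_pd/charge appears in this manual, so the second signal name below is a placeholder for another registered signal.

  BEGIN="2012/05/15 23:29:29"
  END="2012/05/15 23:29:50"
  for sig in xfel_bl_3_tc_bm_1_pd/charge xfel_bl_3_some_other_signal/value ; do
      # derive the output file name from the signal name (/ -> _)
      out="$(echo "$sig" | tr / _).txt"
      syncdaq_get -l200 -b"$BEGIN" -e"$END" "$sig" > "$out"
  done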

6 Installed software list and how to use it

The available software is listed below.

[ HDFView ] Type hdfview in the terminal window.

[ MATLAB ] Type matlab in the terminal window. Note: the number of concurrent clients is restricted to the number of license keys RIKEN possesses. You MUST close MATLAB after finishing your analysis.

[ ImageJ ] Type ImageJ in the terminal window.

[ ROOT ]
- Type export ROOTSYS=/home/software/root/pro in the terminal window.
- Type export PATH="$PATH:$ROOTSYS/bin" in the terminal window.
- Type export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ROOTSYS/lib/root" in the terminal window.
- Type root in the terminal window.

[ FireFox ] Type firefox in the terminal window.
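To avoid retyping the ROOT environment setup from Sec. 6 in every session, the three export lines can be added to your shell startup file. A minimal sketch, assuming your login shell on the HPC system is bash:

  # add these lines to ~/.bashrc on the HPC system (see Sec. 6)
  export ROOTSYS=/home/software/root/pro
  export PATH="$PATH:$ROOTSYS/bin"
  export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:$ROOTSYS/lib/root"

After the next login, root starts without the manual setup.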