The TinyHPC Cluster

Mukarram Ahmad

Abstract

TinyHPC is a Beowulf-class high performance computing cluster with a small physical footprint yet significant computational capacity. The system is of the distributed memory category and uses the Message Passing Interface (MPI) API for communication between nodes. The primary intended usage of TinyHPC is to run fluid dynamics simulations, as well as to test a new data compression technique that uses neural networks.

1 Introduction

This section discusses the basic definition of Beowulf clustering, as well as the advantages of constructing a cluster of this configuration.

1.1 Definition

A Beowulf system is a high performance computing infrastructure constructed from commodity hardware and running an open source operating system and software. Such a setup typically consists of a server connected to a number of headless, diskless nodes over some form of network [3]. Generally, in such a system, the server node assigns the diskless nodes their IP addresses, provides the support each node needs to boot over the network, and hosts networked storage to which all nodes have access [5]. Beowulf systems are sometimes classified as Class I or Class II: the former is composed entirely of commodity hardware and is therefore less expensive and easily reproducible, while the latter uses more specialized components to achieve better performance at a higher cost [2].

1.2 Rationale

Aside from the general speedup gained by using multiple CPUs, the primary motivation for constructing a Beowulf computing cluster is its extremely low construction cost relative to traditional high performance computing systems. Also, since the speed of memory modules has not necessarily kept pace with that of processors, using multiple processor and memory pairs executing tasks in parallel provides a way to circumvent this gap.

2 Related Work

TBD: a listing and brief descriptions of related small-footprint Beowulf systems.

3 System Design

Appendix A shows the hardware configuration of TinyHPC. The following two sections describe the system in more detail.

3.1 Hardware Setup

The TinyHPC cluster has a hybrid hardware composition, with two sets of nodes. Table 1 describes the specifications of each set of compute nodes.

Table 1: Compute node specifications

                                Node Set 1            Node Set 2
    Motherboard                 Toshiba A135-S2356    Intel 815GEW
    Processor Architecture      Intel Pentium 4       Intel Celeron
    Processor Clock Speed       1.5 GHz               1.3 GHz
    Processor Cache (Level 1)   64 KB                 32 KB
    Processor Cache (Level 2)   1024 KB               256 KB
    Main Memory                 256 MB                256 MB
    Network Interconnect        100 Mb/s Ethernet     100 Mb/s Ethernet

3.2 Server Disk Partitioning

The head node of the cluster contains three hard drives, with sizes of 40 GB, 20 GB, and 10 GB respectively. For use in TinyHPC, they were set up as follows:

    Disk / Partition       Size    Mount Point   Type   Usage
    Disk 1 / Partition 1   40 GB   /home         Ext3   Storage space for home directories
    Disk 2 / Partition 1   10 GB   /             Ext3   Root file system
    Disk 2 / Partition 2   1 GB    (none)        swap   Linux swap space
    Disk 3 / Partition 1   10 GB   /nodes        Ext3   Node root file system

3.3 Operating System Installation

The first step was to install the base operating system for the server. Ubuntu Server was chosen for its lean initial software selection. The installation disk was used to partition the disks according to the table listed in the previous section. The operating system installation then proceeded as follows: Ubuntu Server was installed with the root filesystem on the first partition of the second disk, the home directory on the first disk, and swap space on the second partition of the second disk. The third disk was assigned a manual mount point of /nodes. All partitions except swap were formatted with the Ext3 journaling filesystem. The installation was then repeated (with disk partitioning skipped, since it had already been done), this time installing the entire filesystem onto the third disk; this second copy later serves as the root file system for the diskless nodes.
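The partitioning and installation described above correspond, on the head node, to an /etc/fstab along the following lines. This is only a sketch: the device names (/dev/sda1, /dev/sdb1, and so on) are assumptions for illustration, since the paper does not list them.

    # Sketch of the head node's /etc/fstab implied by the partition table above.
    # Device names are hypothetical; the actual system may use different ones.
    /dev/sdb1   /        ext3   defaults   0 1   # 10 GB root file system
    /dev/sda1   /home    ext3   defaults   0 2   # 40 GB shared home directories
    /dev/sdc1   /nodes   ext3   defaults   0 2   # 10 GB node root file system
    /dev/sdb2   none     swap   sw         0 0   # 1 GB swap space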

3.4 Operating System Customization

The first step was to install the required packages on top of the rather basic Ubuntu Server installation. A connection to the internet was therefore required, and the second network card of the head node was used for this purpose. The following lines were added to the /etc/network/interfaces file:

    auto eth1
    iface eth1 inet dhcp

This brings up the interface eth1 on startup and instructs it to get its configuration via DHCP. The next step was to update the package lists. All the sources (except the CD) were uncommented in the /etc/apt/sources.list file, and the following command was issued to update the package lists:

    sudo apt-get update

The packages were then downloaded and installed; it is important to note that the process was repeated for the nodes' root file system as well, by chrooting into the /nodes directory as follows:

    sudo chroot /nodes /bin/bash

The packages were then installed; the commands are broken down to ease later explanation.

    sudo apt-get install ssh rsh-client rsh-server
    sudo apt-get install dhcp3-server tftpd-hpa nfs-kernel-server
    sudo apt-get install python-central python-dev python-gd python-numpy python-scipy python-matplotlib
    sudo apt-get install make gcc g++ gfortran g77 libc6-dev
    sudo apt-get install xorg fluxbox
    sudo apt-get install libatlas-headers libatlas3gf-base

In the above package installation script, the ssh and rsh software were installed to improve control over the nodes and to satisfy the requirements of the MPI packages to be installed later. The DHCP, TFTP, and NFS servers were installed to provide network booting capabilities for the client nodes. The Python packages were installed for the later installation of pympi. The xorg packages and the Fluxbox window manager were optional, but were installed for ease of use in occasional situations when performance was not an issue. The make package and the C, C++, and Fortran compilers were installed to allow building software from source. Finally, the ATLAS library was installed so that the HPL Linpack benchmark could be built later. The same installation was repeated after chrooting into the /nodes directory.
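One practical detail the paper does not describe: when installing packages inside the /nodes chroot, some package scripts expect the kernel virtual filesystems to be present, so it is common to bind-mount them before entering the chroot. A minimal sketch of that preparation, under the layout described above, might be:

    # Hypothetical preparation before chrooting into the node image;
    # not shown in the paper, but commonly needed by package scripts.
    sudo mount --bind /proc /nodes/proc
    sudo mount --bind /dev  /nodes/dev
    sudo chroot /nodes /bin/bash
    # ... run the apt-get install commands listed above inside the chroot ...
    exit
    sudo umount /nodes/dev /nodes/proc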

Next, the operating system of the nodes was to be configured. Three things were done here. First, the node's /etc/fstab file (that is, /nodes/etc/fstab) was edited to remove all lines except the one that mounts /proc, and to add:

    10.11.12.1:/home   /home   nfs   auto,defaults   0 0

This mounts the shared /home of the server in place of the node's own home directory, so that software built from source, as well as other work files, can easily be shared among the nodes of the cluster. After that, the initial ramdisk was customized so that the root file system is mounted over NFS. This was done by changing the BOOT=local line (in /etc/initramfs-tools/initramfs.conf within the node root) to

    BOOT=nfs

and then regenerating the ramdisk, from within the /nodes chroot, by entering

    update-initramfs -u

The names of the kernel and the initial ramdisk were then noted from the /nodes/boot directory. Finally, the node's /etc/rc.local file was modified so that at boot it rewrites /etc/hosts with the node's IP address and hostname, and displays these along with a welcome screen:

    clear
    sleep 1
    echo "======================================"
    echo "              TinyHPC"
    echo "======================================"
    sleep 2
    echo "Welcome to the TinyHPC Cluster!"
    sleep 1
    echo "Copyright (C) 2008 Mukarram Ahmad."
    echo "----------------------------------"
    DATEVAL=$(date)
    MYIP=$(ifconfig | grep -i "Ethernet" -A 1 | grep "inet addr" | cut -d " " -f 12 | cut -d ":" -f 2)
    echo "Today, the date is: $DATEVAL"
    echo "Also, the IP of this node is $MYIP"
    echo "----------------------------------"
    echo "The Boot Process is Now Complete. Please Login Below."
    echo "----------------------------------"
    rm -f /etc/hosts
    touch /etc/hosts
    MYHOSTNAME=$(/bin/hostname)
    echo "$MYIP $MYHOSTNAME" > /etc/hosts
    exit 0
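For completeness, after the fstab edit described above, the node's /etc/fstab (i.e. /nodes/etc/fstab) would contain just two entries, roughly as sketched below; the /proc line shown is the stock entry retained from the original file.

    proc               /proc   proc   defaults        0 0
    10.11.12.1:/home   /home   nfs    auto,defaults   0 0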

Once the required packages were installed and the node operating system was modified to a satisfactory level, configuration of the server node could proceed. The first step was to set up network booting, so that the client nodes could boot over PXE. To do this, the configuration files for the DHCP server, the TFTP server, and the Network File System (NFS) server had to be modified. The DHCP configuration file, located at /etc/dhcp3/dhcpd.conf, was modified to have the following content:

    allow booting;
    allow bootp;
    default-lease-time 600;
    max-lease-time 7200;
    subnet 10.11.12.0 netmask 255.255.255.0 {
        next-server 10.11.12.1;
        filename "pxelinux.0";
        option subnet-mask 255.255.255.0;
        range 10.11.12.2 10.11.12.50;
    }

This basically defines the network on which the DHCP server is to run, and the name of the file (pxelinux.0) from which the nodes boot.
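The paper does not show the boot files themselves. A plausible sketch of the TFTP root (the /tftpboot directory configured in the next step), assuming pxelinux.0 is taken from the syslinux package and the kernel and initial ramdisk are the ones noted from /nodes/boot, is given below; the exact file names and kernel version are assumptions.

    # Hypothetical contents of /tftpboot; actual kernel/initrd names will differ.
    /tftpboot/pxelinux.0
    /tftpboot/vmlinuz-2.6.24-19-server
    /tftpboot/initrd.img-2.6.24-19-server
    /tftpboot/pxelinux.cfg/default

    # A matching pxelinux.cfg/default, pointing the nodes at the NFS root:
    DEFAULT tinyhpc
    LABEL tinyhpc
        KERNEL vmlinuz-2.6.24-19-server
        APPEND initrd=initrd.img-2.6.24-19-server root=/dev/nfs nfsroot=10.11.12.1:/nodes ip=dhcp rw

With the BOOT=nfs initramfs generated earlier, kernel arguments of this form cause each node to mount 10.11.12.1:/nodes as its root file system.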

The TFTP server's configuration file, at /etc/default/tftpd-hpa, was given the following content, which ensures that the TFTP daemon runs on the server and specifies the location of the TFTP root, where the boot files were later placed:

    RUN_DAEMON="yes"
    OPTIONS="-l -s /tftpboot"

Finally, the /etc/exports file was set up so that the right directories were exported via NFS, namely the /nodes directory, where the client node root file system is held, and the /home directory, which is shared among the cluster nodes as previously described:

    /nodes   10.11.12.0/24(rw,no_root_squash,no_subtree_check)
    /home    10.11.12.0/24(rw,no_root_squash,no_subtree_check)

Having set up the configuration files, the three servers were restarted, after first issuing a command that configures the network interface over which the booting occurs, all of this being done as root:

    ifconfig eth0 10.11.12.1 netmask 255.255.255.0 broadcast 10.11.12.255
    /etc/init.d/dhcp3-server restart
    /etc/init.d/tftpd-hpa restart
    /etc/init.d/nfs-kernel-server restart

After testing this setup by booting one client node to make sure it worked, SSH was set up so that it would not prompt for passwords; since /home is shared over NFS, the key generated on the head node is automatically available to all nodes:

    ssh-keygen -q -t rsa -N "" -f "$HOME/.ssh/id_rsa"
    cp $HOME/.ssh/id_rsa.pub $HOME/.ssh/authorized_keys
    chmod 600 $HOME/.ssh/authorized_keys

3.5 MPI Installation

MPICH2 was chosen as the MPI implementation for the cluster. It was installed into the shared /home directory, with shared libraries enabled, by issuing the following as root from within the unpacked MPICH2 source tree:

    mkdir $HOME/MPICH2
    ./configure --prefix=$HOME/MPICH2 --enable-sharedlibs=gcc
    make
    make install
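The paper stops at the installation step. As a sketch of how jobs would then be launched, assuming the MPD process manager used by MPICH2 releases of that era, the install prefix above, and a source tree unpacked somewhere under the shared /home; the node hostnames (node1 through node3) and the secret word are placeholders:

    # Make the MPICH2 binaries available; because /home is NFS-shared,
    # the same path works on the head node and on every compute node.
    export PATH=$HOME/MPICH2/bin:$PATH

    # MPD expects a per-user configuration file containing a secret word.
    echo "MPD_SECRETWORD=tinyhpc" > $HOME/.mpd.conf
    chmod 600 $HOME/.mpd.conf

    # Hypothetical host list, one node hostname per line.
    printf "node1\nnode2\nnode3\n" > $HOME/mpd.hosts

    # Start the MPD ring on the head node plus the listed nodes, then verify it.
    mpdboot -n 4 -f $HOME/mpd.hosts
    mpdtrace

    # Build and run the cpi example shipped with the MPICH2 source, working
    # inside the shared /home so that all nodes can see the binary.
    mpicc examples/cpi.c -o cpi
    mpiexec -n 10 ./cpi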

4 Performance Figures

The system has a theoretical peak of 28 GFLOPS: (1.5 GHz x 5 cores x 2 FLOPs/cycle) + (1.3 GHz x 5 cores x 2 FLOPs/cycle) = 15 GFLOPS + 13 GFLOPS.

TBD - Coming Soon

5 Applications

TBD - Coming Soon

6 Conclusion

TBD - Coming Soon

References

[1] Beowulf.org
[2] Beowulf.org, Class I and Class II page
[3] Beowulf book
[4] Microwulf
[5] TBD
[6] TBD