PC Cluster Building and Parallel Computing


This book explains how to build a PC cluster from ordinary personal computers and how to use it for parallel computing. It is organized in two parts. Part 1 covers building the PC cluster: assembling and connecting the machines, installing Linux on the master computer, and configuring the clustering utilities. Part 2 covers parallel computing with MPI (Message Passing Interface): writing, compiling, and running sequential and parallel programs, and plotting the results.

Contents

Part 1: Building a PC Cluster
- Building the PC cluster
- Installing Linux on the master computer
- Clustering utilities

Part 2: Parallel Computing
- Message Passing Interface (MPI)
- Sequential programs
- MPI programs
- GNUPLOT

1 Building a PC Cluster

A PC cluster is a group of ordinary PCs connected by a network and used together as a single parallel machine. One computer, the master, controls the cluster and distributes MPI (Message Passing Interface) jobs; the remaining computers, the slaves, execute the MPI processes assigned to them. The machines used here are DB-Z70 models (Figure 1.1); one serves as the master and the others as slaves.

The machines are connected to a switching hub (Figure 1.3) to form a local area network (LAN). A KVM (Keyboard, Video, Mouse) switch is also used, so that one keyboard, one monitor, and one mouse can control every machine in the cluster (Figure 1.4).

Connect the cables as shown in Figures 1.8 and 1.9: a LAN cable from each PC to the switching hub, and the keyboard, video, and mouse cables from each PC to the KVM switch (Figure 1.10). Power the machines on and check the LEDs on the hub to confirm that every PC is connected.

The master computer needs two LAN cards: one for the external network and one for the internal cluster network (Figures 1.11 and 1.12).

rsh-server
rsh executes commands on a remote machine; rsh-server provides the corresponding rlogin and rsh servers that accept such requests. MPI uses rlogin and rsh to start processes on the other nodes, so rsh-server must be installed.

lam-mpi
LAM/MPI is an implementation of MPI developed by the Laboratory for Scientific Computing (LSC) at the University of Notre Dame, and it is the implementation used here. Other implementations include MPICH and its MPI-2 successor MPICH2; PVM is an older message-passing alternative.

Downloading Fedora
1. Download Fedora-11-i386-DVD.iso from the KAIST FTP server (Figure 1.14):
ftp://ftp.kaist.ac.kr/fedora/linux/releases/11/fedora/i386/iso/

Figure 1.17 shows the release directory on the FTP server, and Figure 1.18 the Fedora directory inside it.

Save Fedora-11-i386-DVD.iso and burn it to a DVD (Figure 1.21).

2 Installing Linux on the Master Computer

Fedora 11 is installed on the master computer. Put the Fedora 11 DVD in the drive and boot from it. If the machine does not boot from the DVD, change the boot order in the BIOS: press F2 at power-on to enter the BIOS setup, move the CD/DVD drive to the front of the boot order, and press F10 to save the settings and exit.

At the media test screen, press Tab to highlight [Skip] and press Enter, then click Next (Figure 1.23). On the language screen, select English (English) and click Next (Figure 1.24).


On the partitioning screen, select Create custom layout and click Next (Figure 1.30).

Click New to add a partition. Set Mount Point to /, set File System Type to ext3, and under Additional Size Options choose Fill to maximum allowable size, then click OK (Figure 1.32). Click Next and answer Yes to the partitioning warning (Figure 1.33).

On the package selection screen, check Office and Productivity and Software Development; Software Development must be included because it is needed to build LAM-MPI later. Click Next to start the installation (Figure 1.36). When the installation finishes (Figure 1.37 (a)), remove the Fedora 11 DVD from the drive and click Reboot (Figure 1.37 (b)).

After the reboot, create a user account on the Create User screen and set the date and time (Figure 1.40).

Log in with the new account; Figure 1.43 shows the main desktop.

Configuring the external network interface (eth1; which device is which can be checked under the Hardware tab's Description in the Network Configuration tool). In a terminal, run system-config-network. Select the device (eth1), uncheck Controlled by NetworkManager, check Activate device when computer starts, and select Statically set IP address. Fill in Address, Subnet mask, Default gateway address, Primary DNS, and Secondary DNS with the values for the external network, click OK, save with File > Save, and quit.

Address :
Subnet mask :
Default gateway address :
Primary DNS :
Secondary DNS :

Configuring the internal network interface (eth0). In a terminal, run system-config-network. Select the device (eth0), uncheck Controlled by NetworkManager, check Activate device when computer starts, and select Statically set IP address; only Address and Subnet mask are needed here, and each node gets its own IP address on the internal network. Click OK, save with File > Save, and quit.

(Figure 1.46)
Address :
Subnet mask :

(Figure 1.47)
Address :
Subnet mask :

Restarting the network: to apply the new network settings, run service network restart in a terminal.


xinetd
Install the xinetd package: rpm -ivh xinetd-…fc10.i386.rpm (Figure 1.49; the exact version string depends on the package downloaded).

rsh-server
Install the rsh-server package: rpm -ivh rsh-server-…fc10.i386.rpm (Figure 1.50).

LAM-MPI
Unpack the LAM-MPI sources: tar -C /home/korea/download -zxvf lam-7.1.5b2.tar.gz. The -C option selects the directory to extract into; here the archive is unpacked under the home directory of the user korea and creates the directory lam-7.1.5b2. Change into it: cd /home/korea/download/lam-7.1.5b2/.

Build LAM-MPI with configure and make: in the source directory run ./configure, and when it completes run make in the terminal (Figure 1.54).

After installing LAM-MPI, add its environment settings to ~/.bashrc and reload the file: in a terminal, run source ~/.bashrc (Figure 1.57).

ntsysv
Run ntsysv in a terminal and enable the services nfs, rexec, rlogin, rpcbind, and rsh so that they start at boot. Move between items with Tab, then Tab to the OK button and press Enter.

NFS (Network File System)
NFS, originally developed by Sun Microsystems, lets machines share directories with one another over TCP/IP.

The rpc services implement RPC (Remote Procedure Call), on which NFS relies; they are enabled in ntsysv as above. Restart xinetd so the changes take effect: run service xinetd restart in a terminal and check that it reports Starting xinetd [ OK ] (Figure 1.59).

Exports (sharing /home)
Edit /etc/exports with vi /etc/exports and add the line

/home node1(rw,async,no_root_squash) node2(rw,async,no_root_squash)

(Figure 1.61). Restart NFS with service nfs restart in a terminal and check that it reports Starting NFS [ OK ] (Figure 1.62).

portmapper
The RPC portmapper maps RPC program numbers to TCP/IP (or UDP/IP) port numbers; RPC services such as NFS cannot be reached without it.

Host configuration
/etc/hosts.equiv lists the hosts that are trusted for rsh and rlogin without a password, and /etc/hosts maps host names to IP addresses. Edit /etc/hosts.equiv with vi /etc/hosts.equiv and, for a two-node cluster, add the host names node1 and node2. In /etc/hosts, register node1 and node2 with their IP addresses (Figure 1.65).

3 Clustering

mount
To mount the master's shared /home automatically at boot, edit /etc/rc.local on each slave with vi /etc/rc.local and add:

sleep 7
mount -a

(Figure 1.67). The sleep gives the network time to come up before the mount is attempted. To mount by hand instead, run mount -t nfs node1:/home /home in a terminal (Figure 1.68).

lamboot
LAM/MPI runs MPI jobs through lamd (the LAM daemon), which is started on every node by lamboot. First list the cluster nodes in a file: as root, run vi lamhosts and add the lines

node1
node2

Then start the daemons with lamboot -v lamhosts (Figure 1.72).

Part 2: Parallel Computing

Part 2 explains how to write, compile, and run sequential and parallel programs on the cluster, and how to plot the results with GNUPLOT.

1 Message Passing Interface (MPI)

MPI (Message Passing Interface) is a standard for message passing with bindings for C, C++, and FORTRAN. The standard exists in two major versions, MPI-1 and MPI-2; OpenMP is the corresponding standard for shared-memory parallelism. In MPI-based parallel computing, programs are compiled with mpicc and launched with mpirun.
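Every MPI program in this part has the same overall shape: initialize MPI, ask for the process rank and the total number of processes, do the work, and shut MPI down. A minimal sketch (the file name skeleton.c is only for illustration):

/********************* skeleton.c *********************/
#include <stdio.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int my_rank, p;

    MPI_Init(&argc, &argv);                  /* start MPI */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank); /* this process's rank: 0..p-1 */
    MPI_Comm_size(MPI_COMM_WORLD, &p);       /* total number of processes */

    printf("rank %d of %d\n", my_rank, p);

    MPI_Finalize();                          /* shut MPI down */
    return 0;
}

Compiled with mpicc -o skeleton.exe skeleton.c and started with mpirun -np 4 skeleton.exe, it prints one line per process.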

3 Sequential programs

A C program such as hello.c is compiled with the C compiler gcc:

gcc -o hello.exe hello.c

(Figure 2.1). The compiled program is run with ./hello.exe (Figure 2.2). Next, the one-dimensional heat equation (4.3) is solved with a sequential C program, which begins:

#include <stdio.h>
#include <math.h>
#include <stdlib.h>

    for (i=0; i<=n; i++) {
        printf("%f\n", h[i]);
    }
    free_dvector(h, 0, n+1);
    free_dvector(h_new, 0, n+1);
    return 0;
}

double *dvector(long nl, long nh)
{
    double *v;
    v = (double *) malloc((nh-nl+1+1)*sizeof(double));
    return v-nl+1;
}

void free_dvector(double *v, long nl, long nh)
{
    free(v+nl-1);
    return;
}

4 Parallel programs with MPI

When an MPI job starts, each process (CPU) is given a rank: with p processes the ranks are 0, 1, ..., p-1, and rank 0 is conventionally used to coordinate the others.

Each process obtains the total number of processes p and its own rank my_rank, a number between 0 and p-1. Using my_rank and p, processes pass data to one another with MPI_Send and MPI_Recv. Every MPI program ends by calling

int MPI_Finalize(void);

which shuts MPI down; no MPI function may be called after it.

An MPI program is compiled with mpicc; the -o option names the executable:

mpicc -o (executable).exe (source).c

It is executed with mpirun; the -np option sets the number of processes:

mpirun -np (number of processes) (executable).exe

The number of processes p chosen with -np does not have to match the number of machines in the cluster.

Before running an MPI program, the LAM daemons must be running; start them with lamboot -v lamhosts if necessary. In hellompi.c below, every process except rank 0 sends the greeting "Greetings from process (rank)!" to process 0, and process 0 receives and prints the messages in order.

/******************** hellompi.c **********************/
#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char* argv[])
{
    int my_rank, p, source, dest, tag = 0;
    char message[100];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    if (my_rank != 0) {
        sprintf(message, "Greetings from process %d!", my_rank);
        dest = 0;
        MPI_Send(message, strlen(message)+1, MPI_CHAR, dest, tag,
                 MPI_COMM_WORLD);
    } else {
        for (source = 1; source < p; source++) {
            MPI_Recv(message, 100, MPI_CHAR, source, tag,
                     MPI_COMM_WORLD, &status);
            printf("%s\n", message);
        }
    }
    MPI_Finalize();
    return 0;
}

Compiled with mpicc -o hellompi.exe hellompi.c and run with mpirun -np 4 hellompi.exe, it prints Greetings from process 1! through Greetings from process 3!.

The one-dimensional heat equation

    u_t - u_{xx} = 0, \quad 0 \le x \le 1,    (2.1)
    u(0,t) = 0 \ \text{and} \ u(1,t) = 0,     (2.2)

is solved on \Omega = [0,1] \times [0, T=0.001] with the sine initial condition u(x,0) = \sin(\pi x). The exact solution of (2.1)-(2.2) with this initial condition is

    w(x,t) = e^{-\pi^2 t} \sin \pi x.

To discretize, expand u in a Taylor series about (x,t):

    u(x+h,t) = u(x,t) + u_x(x,t)\,h + u_{xx}(x,t)\,\frac{h^2}{2!} + \cdots    (2.3)

and similarly

    u(x-h,t) = u(x,t) - u_x(x,t)\,h + u_{xx}(x,t)\,\frac{h^2}{2!} - \cdots    (2.4)

Adding (2.3) and (2.4) and solving for u_{xx} gives

    u_{xx}(x,t) = \frac{u(x+h,t) - 2u(x,t) + u(x-h,t)}{h^2} + O(h^2).    (2.5)

For the time derivative, a forward difference with step \Delta t is used:

    u_t(x,t) = \frac{u(x,t+\Delta t) - u(x,t)}{\Delta t} + O(\Delta t).    (2.6)
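Substituting (2.5) and (2.6) into (2.1) and dropping the remainder terms gives the explicit update implemented by the programs below, with u_i^n \approx u(ih, n\Delta t):

    u_i^{n+1} = u_i^n + \frac{\Delta t}{h^2}\left( u_{i-1}^n - 2u_i^n + u_{i+1}^n \right),

which is stable under the standard restriction for the explicit scheme, \Delta t \le h^2/2.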

In heat_mpi.c the grid points are divided between the processes: each process updates its own points with the explicit scheme derived from (2.5), exchanging the values at the ends of its block with the neighbouring rank, and rank 0 collects the results.

/***************** heat_mpi.c *********************/
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include "mpi.h"

double *dvector(long nl, long nh);
void free_dvector(double *v, long nl, long nh);

int main(int argc, char *argv[])
{
    int id, p, i, it, it_max = 10, m = 4, tag;
    double pi, dt, h, *u_old, *u_new, start, etime;

    MPI_Init(&argc, &argv);
    start = MPI_Wtime();
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    pi = 4.0*atan(1.0);
    dt = 0.001;
    h = 1.0/(2.0*m-1.0);
    u_old = dvector(0,m);
    u_new = dvector(0,m);
    for (i=0; i<=m; i++) {

    for (i=0; i<m; i++) {
        printf("node %d %f\n", id, u_old[i+id]);
    }
    free_dvector(u_old, 0, m);
    free_dvector(u_new, 0, m);
    etime = MPI_Wtime() - start;
    if (id==1) {
        printf("tasks took %6.6f seconds\n", etime);
    }
    MPI_Finalize();
    return 0;
}

double *dvector(long nl, long nh)
{
    double *v;
    v = (double *) malloc((nh-nl+1+1)*sizeof(double));
    return v-nl+1;
}

void free_dvector(double *v, long nl, long nh)
{
    free(v+nl-1);
}

dvector allocates a block of doubles addressable from index nl to nh, and free_dvector releases it.
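Because one extra element is allocated and the returned pointer is shifted, dvector(nl,nh) in fact yields a pointer whose valid indices run from nl-1 to nh. A small, hypothetical usage example (not a listing from the book; the definitions are repeated so it compiles on its own):

/* dvector(1,n) returns a pointer indexable from 0 (= nl-1) to n (= nh) */
#include <stdio.h>
#include <stdlib.h>

double *dvector(long nl, long nh)       /* repeated from the listing above */
{
    double *v;
    v = (double *) malloc((nh-nl+1+1)*sizeof(double));
    return v-nl+1;
}

void free_dvector(double *v, long nl, long nh)
{
    free(v+nl-1);
}

int main(void)
{
    long i, n = 5;
    double *u = dvector(1, n);          /* u[1]..u[n] (and u[0]) usable */
    for (i=1; i<=n; i++) u[i] = (double)(i*i);
    for (i=1; i<=n; i++) printf("u[%ld] = %f\n", i, u[i]);
    free_dvector(u, 1, n);              /* frees the original block */
    return 0;
}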

Processes, identified by their ranks, exchange data by either blocking or nonblocking communication.

Blocking communication: when one rank calls MPI_Send, the matching rank must call MPI_Recv to accept the message (Figure 2.9). Because a blocking MPI_Send or MPI_Recv does not return until the transfer can proceed, two processes that both send first and then receive can end up waiting for each other. Figure 2.8 illustrates blocking communication.
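One safe ordering for a blocking exchange is to pair the calls so that while one rank sends, the other receives. A minimal sketch for two ranks (a hypothetical illustration, not a listing from the book; run with mpirun -np 2):

/************* blocking exchange sketch *************/
#include <stdio.h>
#include "mpi.h"

int main(int argc, char *argv[])
{
    int id, tag = 0;
    double mine, theirs = 0.0;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    mine = (double) id;

    if (id == 0) {        /* rank 0: send first, then receive */
        MPI_Send(&mine, 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD);
        MPI_Recv(&theirs, 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD, &status);
    } else if (id == 1) { /* rank 1: receive first, then send */
        MPI_Recv(&theirs, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD, &status);
        MPI_Send(&mine, 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD);
    }
    printf("rank %d received %f\n", id, theirs);
    MPI_Finalize();
    return 0;
}

If both ranks called MPI_Send first, each could block waiting for the other's MPI_Recv; the nonblocking calls below avoid the ordering problem altogether.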

/******************* PART A ***********************/
MPI_Request requests[2*p];
MPI_Status status[2*p];

for (i=0; i<(2*p); i++)
    requests[i] = MPI_REQUEST_NULL;

for (it=1; it<=it_max; it++) {
    if (id==1) {
        tag = 1;
        MPI_Isend(&u_old[1], 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD,
                  &requests[0]);
    }
    if (id==0) {
        tag = 1;
        MPI_Irecv(&u_old[m], 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD,
                  &requests[1]);
    }
    if (id==0) {
        tag = 2;
        MPI_Isend(&u_old[m-1], 1, MPI_DOUBLE, 1, tag, MPI_COMM_WORLD,
                  &requests[2]);
    }
    if (id==1) {
        tag = 2;
        MPI_Irecv(&u_old[0], 1, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD,
                  &requests[3]);
    }
/***************************************************/
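The Isend and Irecv calls above only start the transfers. Before the boundary values in u_old are used for the next update, the posted requests must be completed; a minimal way to do this inside the time loop, using the same variables as PART A (a sketch and an assumption, not the book's own continuation):

    /* still inside the time-stepping loop: wait until every posted
       send and receive has completed (MPI_REQUEST_NULL entries are
       ignored), so u_old holds consistent boundary values */
    MPI_Waitall(2*p, requests, status);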

The results are plotted with GNUPLOT. The command

plot "data.dat" using 1:2 title 'a' with lines,\
     "data.dat" using 1:3 title 'b' with lines

draws column 2 of data.dat against column 1 as a line labelled a, and column 3 against column 1 as a line labelled b (Figure 2.10).

In the next example the pieces of the solution computed by each process are collected with MPI_Gather and written to the file heat1d.dat.

void free_dvector(double *v, long nl, long nh);

int main(int argc, char *argv[])
{
    int id, p, i, it, it_max, m = 10, s_tag, r_tag;
    double pi, T, dt, l = 1.0, h, *gather, *buffer, *exact_u,
           *x, *xp, *u_old, *u_new, start, etime;

    MPI_Init(&argc, &argv);
    start = MPI_Wtime();
    MPI_Comm_rank(MPI_COMM_WORLD, &id);
    MPI_Comm_size(MPI_COMM_WORLD, &p);

    pi = 4.0*atan(1.0);
    dt = 0.001;
    T = 0.5;
    it_max = (int)(T/dt);
    h = (double)l/(p*m-1.0);
    x = dvector(1,m);
    xp = dvector(1,p*m);
    u_old = dvector(0,m+1);
    u_new = dvector(0,m+1);
    buffer = dvector(0,m-1);
    gather = dvector(1,p*m);
    exact_u = dvector(1,p*m);

    for (i=1; i<=m; i++) {
        x[i] = (double) 0 + ((i+m*id)-1)*h;
    }
    for (i=1; i<=p*m; i++) {

    } else {
        for (i=2; i<=m; i++) {
            u_new[i] = u_old[i] + dt*(u_old[i-1] - 2.0*u_old[i]
                                    + u_old[i+1])/(h*h);
        }
    }
    for (i=1; i<=m; i++) {
        u_old[i] = u_new[i];
    }
}
for (i=1; i<=m; i++) {
    buffer[i-1] = u_old[i];
}
MPI_Gather(buffer, m, MPI_DOUBLE, gather, m, MPI_DOUBLE, 0, MPI_COMM_WORLD);
etime = MPI_Wtime() - start;

if (id==0) {
    FILE *fpp;
    for (i=1; i<=p*m; i++) {
        exact_u[i] = sin(pi*h*(i-1))*exp(-1.0*pow(pi,2)*T);
    }
    fpp = fopen("heat1d.dat", "w");  /* open heat1d.dat for writing */
    for (i=1; i<=2*m; i++) {         /* write the solution (p = 2 nodes) */
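The listing breaks off inside the write loop. One completion consistent with the three-column plot command in the GNUPLOT section (an assumption, not the original code):

        /* assumed body: x value, gathered numerical solution, exact value */
        fprintf(fpp, "%f %f %f\n", h*(i-1), gather[i], exact_u[i]);
    }
    fclose(fpp);
}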

