Supporting Information Tom Brown Supervisors: Dr Andrew Angel, Prof. Jane Mellor Project 1


Steady State Test

To determine after how many time steps the simulated system reached a steady state, the system was run for a varying number of time steps and compared to a long-run simulation. Under a steady-state assumption, the distributions of nuclear and cytoplasmic transcripts would not change if any more time steps were simulated. After 5 time steps the system produced distributions that were identical to those of the system after having undergone 1, time steps. It was therefore assumed that the system had reached a steady state after 5 time steps, and this number was used for further simulations (Fig. S1).

[Fig. S1: simulated nuclear and cytoplasmic transcript distributions for each number of iterations tested.]

Fig. S 1: A test to determine when the simulated system reached a steady state. Shown are the simulated distributions of nuclear and cytoplasmic transcripts. After 5 and 1 iterations the system has not yet reached a steady state, whereas after 5 iterations the system can be assumed to be in a steady state. This was further tested by comparison to a long-run simulation with 1, simulations.
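For reference, the comparison underlying this test can be expressed as a short routine. The following is only a minimal sketch of such a check, not the code used to produce Fig. S1; the function name, the pointer-based interface and the use of the maximum bin-wise difference are illustrative assumptions.

#include <math.h>

/* Maximum absolute difference between two normalised histograms of
 * transcript counts (n_bins bins each). A small value indicates that the
 * shorter run already matches the long-run (steady-state) distribution. */
double max_hist_difference(const double *short_run, const double *long_run, int n_bins)
{
    double total_short = 0.0, total_long = 0.0, max_diff = 0.0;
    int i;

    for (i = 0; i < n_bins; i++) {
        total_short += short_run[i];
        total_long  += long_run[i];
    }
    for (i = 0; i < n_bins; i++) {
        double diff = fabs(short_run[i] / total_short - long_run[i] / total_long);
        if (diff > max_diff) {
            max_diff = diff;
        }
    }
    return max_diff;
}

A run would then be judged to have reached a steady state once this difference falls below a small tolerance for both the nuclear and the cytoplasmic histograms.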

Unweighted Metric

To determine whether the changes in parameter values described were consistent when using a different fitting metric, the MCMC parameter search simulations were repeated with an unweighted χ² statistic. For O_i observed counts and E_i expected counts from the simulated distributions, the χ² statistic used was given by:

    Σ_{i=0}^{N} (O_i - E_i)^2 / E_i

where N is the total number of nuclear and cytoplasmic data points with 5 or more counts (cf. Searching the parameter space - main document). This statistic was used to find the distributions that fit the data points in the nucleus and cytoplasm with each data point equally weighted, rather than the nuclear and cytoplasmic data as a whole having equal weight. The fits obtained using this metric are better for the cytoplasmic data, as there are consistently many more data points with 5 or more counts in the cytoplasm (Fig. S2). Given the poor fits that are then obtained in the nucleus, it is unsurprising that the relationship between the initiation rate and the on frequency is not as clear with this metric (Fig. S4), and similarly for the nuclear rates (Fig. S3B). However, one can still see a separation between the gradients of the ADH4 and SH9 transcription initiation frequencies (Fig. S3A), with the ADH4 strain having a higher transcription initiation frequency than the antisense-less SH9 strain. The change in nuclear rates is also reproduced with this new metric (Fig. S3C), with nuclear transcripts spending less time in the nucleus in the presence of antisense transcription (ADH4) compared to the strain without antisense transcription (SH9).

[Fig. S2: plots comparing real and simulated data in the nucleus and cytoplasm for the two strains (panels A and B).]

Fig. S 2: Fits to the nuclear and cytoplasmic data obtained via MCMC parameter space searches using an unweighted χ² statistic. Fits here are to the GAL1 ADH4 (A) and SH9, antisense-less, (B) data.
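For reference, the unweighted statistic described above can be written as a short routine. This is only a minimal sketch; the function name, array names and bin count are illustrative assumptions rather than the code actually used (the full search code is reproduced under Selected Code below).

#define N_BINS 50 /* number of transcript-count bins per compartment (assumed) */

/* Unweighted chi-square over all nuclear and cytoplasmic bins whose observed
 * count is at least 5; every such bin contributes with equal weight. */
double unweighted_chi_square(const double observed_nuc[N_BINS], const double expected_nuc[N_BINS],
                             const double observed_cyt[N_BINS], const double expected_cyt[N_BINS])
{
    double chi_sq = 0.0;
    int i;

    for (i = 0; i < N_BINS; i++) {
        if (observed_nuc[i] >= 5 && expected_nuc[i] > 0.0) {
            double d = observed_nuc[i] - expected_nuc[i];
            chi_sq += (d * d) / expected_nuc[i];
        }
        if (observed_cyt[i] >= 5 && expected_cyt[i] > 0.0) {
            double d = observed_cyt[i] - expected_cyt[i];
            chi_sq += (d * d) / expected_cyt[i];
        }
    }
    return chi_sq;
}

The weighted metric used in the main document differs only in how the nuclear and cytoplasmic contributions are balanced against one another, rather than in the per-bin terms themselves.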

[Fig. S3: panel A "Transcription initiation frequencies" (Initiation (min^-1) plotted against /On), panel B "Nuclear rates" (Elong x Exp (min^-2) plotted against Elong + Exp (min^-1)), panel C "Transcription initiation and nuclear rates" (Transcript initiation frequency (min^-1) plotted against Elong x Exp / (Elong + Exp) (min^-1)); ADH4 and SH9 data are shown in each panel.]

Fig. S 3: Detected changes in the transcription initiation frequencies (init × on / (on + off)) (A) and the time transcripts spent in the nucleus (B & C) between the two strains. A shows the increase in the ratio init × on / (on + off), the transcription initiation frequency, in the presence of antisense transcription (ADH4), demonstrated by the increase in gradient of the determined parameter values. B & C show the increase in the nuclear rate, the rate at which nuclear transcripts go from starting transcription to being exported, in the presence of antisense transcription (ADH4).

[Fig. S4: histograms of the Transcription Initiation Rate (min^-1) for ADH4 (A) and SH9 (B), with the mode of each distribution indicated.]

Fig. S 4: Histograms of the determined values of the transcription initiation rate, given by init × on / (on + off). The transcription initiation frequency of the antisense-less strain, SH9, was found to be lower (B) than that of the strain with antisense transcription, ADH4 (A).
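The quantities plotted in Figs. S3 and S4 are combinations of the fitted rate parameters rather than the raw parameters themselves. As an illustration, these derived quantities could be computed from a single accepted parameter set as in the following sketch; the struct name, function names and placeholder values are assumptions for illustration only.

#include <stdio.h>

/* One accepted parameter set from the MCMC search (names are illustrative). */
struct rate_params {
    double on;    /* promoter on rate (min^-1)         */
    double off;   /* promoter off rate (min^-1)        */
    double init;  /* transcription initiation (min^-1) */
    double elong; /* elongation rate (min^-1)          */
    double exp_;  /* nuclear export rate (min^-1)      */
};

/* Effective transcription initiation frequency, init * on / (on + off),
 * as plotted in Fig. S3A and histogrammed in Fig. S4. */
double initiation_frequency(const struct rate_params *p)
{
    return p->init * p->on / (p->on + p->off);
}

/* Effective nuclear rate, elong * exp / (elong + exp), the rate at which
 * transcripts go from starting transcription to being exported (Fig. S3C). */
double nuclear_rate(const struct rate_params *p)
{
    return p->elong * p->exp_ / (p->elong + p->exp_);
}

int main(void)
{
    struct rate_params p = { 0.5, 0.5, 0.5, 0.5, 0.5 }; /* placeholder values */
    printf("initiation frequency: %f min^-1\n", initiation_frequency(&p));
    printf("nuclear rate: %f min^-1\n", nuclear_rate(&p));
    return 0;
}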

Selected Code

MCMC parameter search

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <mpi.h>
#include <time.h>

#define PI 3.14159265358979
#define int_numtimesteps 5
#define int_numiterations 3 // This will be multiplied by the number of worker processors
#define flt_standdev 0.2
#define flt_degrate 0.45 // Rate at which cytoplasmic transcripts are degraded
#define int_numparametersteps 1

// READ ME!!!!
// To run this code, compile as follows:
// > mpicc -lm -O3 complete_systemsearch.c -o complete_systemsearch
// > mpirun -np # complete_systemsearch.exe <Nucleus data file> <Cytoplasm Data file> <Output data filename>
//
// The script is set up with one root node distributing simulations to other worker
// nodes after randomly sampling new parameters to be tested. The script as it
// currently stands will not work if run on one processor.

// Main function
int main(int argc,char* argv[]){
    if (argc < 3){
        printf("incorrect number of input arguments (Pass file names (nucleus_data, cytoplasm_data, output_file) to function on command line)\n");
        abort();
    }

    // Initialise the MPI environment
    int rank, worker_rank, n_procs;
    MPI_Init (&argc,&argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &n_procs);
    MPI_Status status;

    // Open file to write parameters to
    FILE *fp;
    fp = fopen(argv[3],"w");
    fprintf(fp,"iteration_no\tchi_square\ton_rate\toff_rate\tinitiation_rate\telongation_rate\texport_rate\tdegradation_rate\n");

    int int_rootfinished, int_workerfinished;
    int i,j,k;
    int int_currentiteration;
    double flt_newmeasure; // These values will assess the fit of the parameters
    double flt_currentmeasure = 1;

    // Array for storing histogram of results
    double fltarr_localnucresults[5], fltarr_rootnucresults[5];
    double fltarr_localcytresults[5], fltarr_rootcytresults[5];
    double fltarr_globalnucresults[5];
    double fltarr_globalcytresults[5];
    int int_nucleusresult;
    int int_cytoplasmresult;
    int int_currenttimestep;

    // Concentrations are as follows:
    // 0: Gene promoter state, 0 = off, 1 = on
    // 1: Partial transcript undergoing transcription

5 // 2: Complete transcript in nucleus // 3: Complete transcript in cytoplasm int intarr_currentconcentrations[4]; int intarr_newconcentrations[4]; double flt_currenttime; double flt_newtime; int intarr_conc[4]; double flt_time; double fltarr_propensities[6]; double flt_totalpropensity; double fltarr_cumulpropensities[6]; double flt_timerand; double flt_tau; double flt_proposedtime; double flt_rand2; double flt_onrate, flt_offrate; // Rate at which gene moves between off and on transcription state double flt_initiationrate; // Rate at which transcription is initiated double flt_elongationrate; // Rate at which transcription elongation takes place, determining time for complete RNA molecule to be transcribed double flt_exportrate; // Rate at which transcripts are exported from the nucleus double flt_mcmcurand1,flt_mcmcurand2,flt_mcmcurand3,flt_mcmcurand4,flt_mcmcurand5,flt_mcmcurand6; double flt_mcmcnrand1,flt_mcmcnrand2,flt_mcmcnrand3,flt_mcmcnrand4,flt_mcmcnrand5,flt_mcmcnrand6; double flt_comparerand; double flt_proposedon; double flt_proposedoff; double flt_proposedinit; double flt_proposedelong; double flt_proposedexp; double fltarr_proposedparameters[6]; double fltarr_currentparameters[6]; double flt_totalnuc; double fltarr_normnucresults[5]; double fltarr_cytresults[5]; double flt_totalcyt; double fltarr_normcytresults[5]; int int_currentparameterstep; double flt_chisq; double fltarr_cumulnucdata[5]; double fltarr_cumulcytodata[5]; double fltarr_cumulnucresults[5]; double fltarr_cumulcytoresults[5]; // Seed random numbers to be different for each processor srand(time(null)+rank); // Read nucleus and cytoplasm transcript data from file double fltarr_nucleusdata[5],fltarr_cytodata[5]; double flt_totalnucdata = ; double flt_totalcytdata = ; double flt_nucbins =, flt_cytbins = ; int int_fileline; char chararr_nuc[128],chararr_cyt[128]; int int_linelength = 128; FILE *nucleus_file, *cyto_file; nucleus_file = fopen(argv[1],"r"); cyto_file = fopen(argv[2],"r"); for (int_fileline=;int_fileline<5;int_fileline++){ fgets(chararr_nuc, int_linelength, nucleus_file); fltarr_nucleusdata[int_fileline] = atof(chararr_nuc); fgets(chararr_cyt, int_linelength, cyto_file); fltarr_cytodata[int_fileline] = atof(chararr_cyt); flt_totalnucdata += fltarr_nucleusdata[int_fileline]; flt_totalcytdata += fltarr_cytodata[int_fileline]; // Count number of bins included in Chi-Square statistic to perform the weighting if (fltarr_nucleusdata[int_fileline] >= 5){ flt_nucbins ++; if (fltarr_cytodata[int_fileline] >= 5){ flt_cytbins ++; 5

6 // Set the initial values of the constant parameters all in units of /min flt_onrate =.5; flt_offrate =.5; flt_initiationrate =.5; flt_elongationrate =.5; flt_exportrate =.5; for (int_currentparameterstep=;int_currentparameterstep<int_numparametersteps;int_currentparameterstep++){ if (rank == ){ //Do root tasks for (j=;j<5;j++){ fltarr_globalnucresults[j] = ; fltarr_globalcytresults[j] = ; // Simulate 6 standard normal random numbers via Box-Muller transform flt_mcmcurand1 = (double)rand()/(rand_max); flt_mcmcurand2 = (double)rand()/(rand_max); flt_mcmcurand3 = (double)rand()/(rand_max); flt_mcmcurand4 = (double)rand()/(rand_max); flt_mcmcurand5 = (double)rand()/(rand_max); flt_mcmcurand6 = (double)rand()/(rand_max); flt_mcmcnrand1 = sqrt(-2*log(flt_mcmcurand1))*cos(2*pi*flt_mcmcurand2); flt_mcmcnrand2 = sqrt(-2*log(flt_mcmcurand2))*sin(2*pi*flt_mcmcurand1); flt_mcmcnrand3 = sqrt(-2*log(flt_mcmcurand3))*cos(2*pi*flt_mcmcurand4); flt_mcmcnrand4 = sqrt(-2*log(flt_mcmcurand4))*sin(2*pi*flt_mcmcurand3); flt_mcmcnrand5 = sqrt(-2*log(flt_mcmcurand5))*cos(2*pi*flt_mcmcurand6); // Use normal random numbers to sample new parameters flt_proposedon = fabs(flt_onrate + flt_mcmcnrand1 * flt_standdev); flt_proposedoff = fabs(flt_offrate + flt_mcmcnrand2 * flt_standdev); flt_proposedinit = fabs(flt_initiationrate + flt_mcmcnrand3 * flt_standdev); flt_proposedelong = fabs(flt_elongationrate + flt_mcmcnrand4 * flt_standdev); flt_proposedexp = fabs(flt_exportrate + flt_mcmcnrand5 * flt_standdev); // Prepare array to be passed to worker processors fltarr_proposedparameters[] = flt_proposedon; fltarr_proposedparameters[1] = flt_proposedoff; fltarr_proposedparameters[2] = flt_proposedinit; fltarr_proposedparameters[3] = flt_proposedelong; fltarr_proposedparameters[4] = flt_proposedexp; fltarr_proposedparameters[5] = flt_degrate; for (worker_rank = 1;worker_rank<n_procs;worker_rank++){ // Send proposed parameters to the worker processors MPI_Send(fltarr_proposedParameters,6,MPI_DOUBLE,worker_rank,,MPI_COMM_WORLD); for (worker_rank = 1;worker_rank<n_procs;worker_rank++){ for (j=;j<5;j++){ fltarr_rootnucresults[j] = ; fltarr_rootcytresults[j] = ; // Receive the simulated nuclear and cytoplasmic data from the worker processors MPI_Recv(fltarr_localNucResults,5,MPI_DOUBLE,worker_rank,1,MPI_COMM_WORLD,&status); MPI_Recv(fltarr_localCytResults,5,MPI_DOUBLE,worker_rank,2,MPI_COMM_WORLD,&status); for (j=;j<5;j++){ fltarr_globalnucresults[j] += fltarr_localnucresults[j]; fltarr_globalcytresults[j] += fltarr_localcytresults[j]; flt_totalnuc = ; flt_totalcyt = ; // Normalise the values in the results to give frequency for (j=;j<5;j++){ flt_totalnuc += fltarr_globalnucresults[j]; flt_totalcyt += fltarr_globalcytresults[j]; // Ensure simulated data is on the same scale as the experimental data for (j=;j<5;j++){ fltarr_normnucresults[j] = flt_totalnucdata * (fltarr_globalnucresults[j]/flt_totalnuc); fltarr_normcytresults[j] = flt_totalcytdata * (fltarr_globalcytresults[j]/flt_totalcyt); // If simulated data give transcript values that are too high, there will be no // results less than 5, so will divide by zero causing NaN. Make the Chi-Square 6

7 // value incredibly large so that these parameters will not be accepted for (j=;j<5;j++){ if (isnan(fltarr_normnucresults[j])){ fltarr_normnucresults[j] = ; else if (isnan(fltarr_normcytresults[j])){ fltarr_normcytresults[j] = ; // Calculate un-weighted Chi-Squared statistic, ignoring data with fewer than 5 experimental results flt_chisq = ; for (j = ;j<5;j++){ if ((fltarr_nucleusdata[j] >= 5)){ if (fltarr_normnucresults[j] == ){ flt_chisq += 1; else { flt_chisq += ((fltarr_normnucresults[j]-fltarr_nucleusdata[j]) * (fltarr_normnucresults[j]-fltarr_nucleusdata[j]))/(fltarr_normnucresults[j if ((fltarr_cytodata[j] >= 5)){ if (fltarr_normcytresults[j] == ){ flt_chisq += 1; else { flt_chisq +=((fltarr_normcytresults[j]-fltarr_cytodata[j]) * (fltarr_normcytresults[j]-fltarr_cytodata[j]))/(fltarr_normcytresults[j]); if (flt_chisq == ){ flt_chisq = 1; flt_newmeasure = flt_chisq; // Compare the measures from the previous parameters with the proposed parameters and accept/reject flt_comparerand = (double)rand()/(rand_max); if ((.5 + (.5 * flt_comparerand)) < (flt_currentmeasure/flt_newmeasure)){ // Accept new parameters flt_onrate = flt_proposedon; flt_offrate = flt_proposedoff; flt_initiationrate = flt_proposedinit; flt_elongationrate = flt_proposedelong; flt_exportrate = flt_proposedexp; flt_currentmeasure = flt_newmeasure; // Print chi-square every 1 parameter jumps to monitor progress if (int_currentparameterstep%1 == ){ printf("current iteration: %d, Current rank: %f\n",int_currentparameterstep,flt_currentmeasure); // Write the parameters from this timestep to file fprintf(fp,"%d\t%.1lf\t%.1f\t%.1f\t%.1f\t%.1f\t%.1f\t%.1f\n",int_currentparameterstep,flt_currentmeasure,flt_onrate,flt_offrate,flt_initia else if (rank > ) { // Do worker tasks for (j=;j<5;j++){ fltarr_localnucresults[j] = ; fltarr_localcytresults[j] = ; fltarr_globalnucresults[j] = ; fltarr_globalcytresults[j] = ; //Receive proposed parameter values from the root processor MPI_Recv(fltarr_currentParameters,6,MPI_DOUBLE,,,MPI_COMM_WORLD,&status); // Run the Gillespie simulation multiple times for (int_currentiteration=;int_currentiteration<int_numiterations;int_currentiteration++){ for (k=;k<4;k++){ intarr_newconcentrations[k] = ; flt_newtime = ; flt_currenttime = ; // Run the single timestep multiple times to perform the Gillespie simulation for (int_currenttimestep=;int_currenttimestep<int_numtimesteps;int_currenttimestep++){ for (k=;k<4;k++){ intarr_currentconcentrations[k] = intarr_newconcentrations[k]; 7

8 //flt_currenttime = flt_newtime; // Calculate the propensities for the current reaction // Promoter site switches from off to on configuration fltarr_propensities[] = fltarr_currentparameters[]*(1-intarr_currentconcentrations[]); // Promoter site switches from on to off configuration fltarr_propensities[1] = fltarr_currentparameters[1]*intarr_currentconcentrations[]; // Gene begins to be transcribed fltarr_propensities[2] = fltarr_currentparameters[2]*intarr_currentconcentrations[]; // Partial transcript is given a finishing time for transcription fltarr_propensities[3] = fltarr_currentparameters[3]*intarr_currentconcentrations[1]; // Transcript is exported from the nucleus to the cytoplasm fltarr_propensities[4] = fltarr_currentparameters[4]*intarr_currentconcentrations[2]; // Transcript is degraded in the cytoplasm fltarr_propensities[5] = fltarr_currentparameters[5]*intarr_currentconcentrations[3]; // Calculate cumulatve propensity array flt_totalpropensity = ; for (i=;i<6;i++){ flt_totalpropensity += fltarr_propensities[i]; fltarr_cumulpropensities[] = fltarr_propensities[]/flt_totalpropensity; for (i=1;i<6;i++){ fltarr_cumulpropensities[i] = fltarr_cumulpropensities[i-1] + fltarr_propensities[i]/flt_totalpropensity; flt_rand2 = (double)rand()/(rand_max); // Establish which reaction occurred and update the current concentrations if (flt_rand2 <= fltarr_cumulpropensities[]){ // Promoter changes from off to on configuration intarr_newconcentrations[] = intarr_currentconcentrations[]+1; intarr_newconcentrations[1] = intarr_currentconcentrations[1]; intarr_newconcentrations[2] = intarr_currentconcentrations[2]; intarr_newconcentrations[3] = intarr_currentconcentrations[3]; else if (flt_rand2 <= fltarr_cumulpropensities[1]){ // Promoter changes from on to off configuration intarr_newconcentrations[] = intarr_currentconcentrations[]-1; intarr_newconcentrations[1] = intarr_currentconcentrations[1]; intarr_newconcentrations[2] = intarr_currentconcentrations[2]; intarr_newconcentrations[3] = intarr_currentconcentrations[3]; else if (flt_rand2 <= fltarr_cumulpropensities[2]){ intarr_newconcentrations[] = intarr_currentconcentrations[]; // New partial transcript begins the transcription process intarr_newconcentrations[1] = intarr_currentconcentrations[1]+1; intarr_newconcentrations[2] = intarr_currentconcentrations[2]; intarr_newconcentrations[3] = intarr_currentconcentrations[3]; else if (flt_rand2 <= fltarr_cumulpropensities[3]){ intarr_newconcentrations[] = intarr_currentconcentrations[]; // Partial transcript finishes transcribing intarr_newconcentrations[1] = intarr_currentconcentrations[1]-1; // Completed transcript enters the nucleus intarr_newconcentrations[2] = intarr_currentconcentrations[2]+1; intarr_newconcentrations[3] = intarr_currentconcentrations[3]; else if (flt_rand2 <= fltarr_cumulpropensities[4]){ intarr_newconcentrations[] = intarr_currentconcentrations[]; intarr_newconcentrations[1] = intarr_currentconcentrations[1]; // Transcript is exported from the nucleus intarr_newconcentrations[2] = intarr_currentconcentrations[2]-1; // Transcript enters the cytoplasm intarr_newconcentrations[3] = intarr_currentconcentrations[3]+1; else if (flt_rand2 <= fltarr_cumulpropensities[5]){ intarr_newconcentrations[] = intarr_currentconcentrations[]; intarr_newconcentrations[1] = intarr_currentconcentrations[1]; intarr_newconcentrations[2] = intarr_currentconcentrations[2]; // Transcript is degraded in the cytoplasm intarr_newconcentrations[3] = intarr_currentconcentrations[3]-1; else { 8

9 // Occasional rounding error produces cumulative propensities that do not add up to 1 // In these cases, no reaction occurs. Go back to original time printf("error, no reaction final propensity: %.2lf random number: %.2lf\n",fltarr_cumulPropensities[5],flt_rand2); intarr_newconcentrations[] = intarr_currentconcentrations[]; intarr_newconcentrations[1] = intarr_currentconcentrations[1]; intarr_newconcentrations[2] = intarr_currentconcentrations[2]; intarr_newconcentrations[3] = intarr_currentconcentrations[3]; //flt_newtime = flt_currenttime; // Count any partially formed or complete transcript in the final result int_nucleusresult = intarr_newconcentrations[1] + intarr_newconcentrations[2]; int_cytoplasmresult = intarr_newconcentrations[3]; // Update the result histogram if (int_nucleusresult < 5){ fltarr_localnucresults[int_nucleusresult]++; if (int_nucleusresult < 5){ fltarr_localcytresults[int_cytoplasmresult]++; // Send simulated results back to root processor MPI_Send(fltarr_localNucResults,5,MPI_DOUBLE,,1,MPI_COMM_WORLD); MPI_Send(fltarr_localCytResults,5,MPI_DOUBLE,,2,MPI_COMM_WORLD); if (rank == ){ fclose(fp); printf("file closed\n"); int_rootfinished = 1; for (j=1;j<n_procs;j++){ MPI_Send(&int_rootFinished,1,MPI_INT,j,3,MPI_COMM_WORLD); int MPI_Finalize (); else{ int_workerfinished = ; // Ensure process finalizes after the file has been completely written to avoid file corruption MPI_Recv(&int_workerFinished,1,MPI_INT,,3,MPI_COMM_WORLD,&status); int MPI_Finalize (); Degradation rate simulations #include <stdio.h> #include <stdlib.h> #include <math.h> #include <time.h> #include "omp.h" // READ ME!! // This file is to be compiled and run as follows: // gcc -fopenmp -lm -O3 calculate_degrate.c -o calculate_degrate // //./calculate_degrate // // // The names of the data and results files are included in the code. Ensure this // script is run in the same directory as the relevant data files void main(int argc, char* argv[]){ // Extract Cytoplasmic data from file FILE *ADH4_7minCytFile; 9

10 FILE *ADH4_15minCytFile; FILE *SH9_7minCytFile; FILE *SH9_15minCytFile; ADH4_7minCytFile = fopen("14514_adh4_7mindch2cyto.txt","r"); ADH4_15minCytFile = fopen("14514_adh4_15mindch2cyto.txt","r"); SH9_7minCytFile = fopen("14514_sh9_7mindch2cyto.txt","r"); SH9_15minCytFile = fopen("14514_sh9_15mindch2cyto.txt","r"); double ADH4_7minCytData[5], ADH4_15minCytData[5]; double SH9_7minCytData[5], SH9_15minCytData[5]; //Read raw transcript data from file char chararr_buffer[128]; int int_linelength = 128; int int_currentline; for (int_currentline=;int_currentline<5;int_currentline++){ fgets(chararr_buffer,int_linelength,adh4_7mincytfile); ADH4_7minCytData[int_currentLine] = atof(chararr_buffer); for (int_currentline=;int_currentline<5;int_currentline++){ fgets(chararr_buffer,int_linelength,adh4_15mincytfile); ADH4_15minCytData[int_currentLine] = atof(chararr_buffer); for (int_currentline=;int_currentline<5;int_currentline++){ fgets(chararr_buffer,int_linelength,sh9_7mincytfile); SH9_7minCytData[int_currentLine] = atof(chararr_buffer); for (int_currentline=;int_currentline<5;int_currentline++){ fgets(chararr_buffer,int_linelength,sh9_15mincytfile); SH9_15minCytData[int_currentLine] = atof(chararr_buffer); fclose(adh4_7mincytfile); fclose(adh4_15mincytfile); fclose(sh9_7mincytfile); fclose(sh9_15mincytfile); //Normalise the 15 minute data to the 7 minute data double total_adh47min, total_adh415min, total_sh97min, total_sh915min; int i; total_adh47min = ; total_adh415min = ; total_sh97min = ; total_sh915min = ; for (i=;i<5;i++){ total_adh47min += ADH4_7minCytData[i]; total_adh415min += ADH4_15minCytData[i]; total_sh97min += SH9_7minCytData[i]; total_sh915min += SH9_15minCytData[i]; for (i=;i<5;i++){ ADH4_15minCytData[i] = floor(((adh4_15mincytdata[i] * total_adh47min)/total_adh415min) +.5); SH9_15minCytData[i] = floor(((sh9_15mincytdata[i] * total_sh97min)/total_adh415min) +.5); FILE *ADH4degOutput_file; FILE *SH9degOutput_file; ADH4degOutput_file = fopen("adh4gal1deg_rates.txt","w"); SH9degOutput_file = fopen("sh9gal1deg_rates.txt","w"); int iteration_no; double deg_rate, exp_rate; int bin, cell; double ADH4_cytTranscripts7min, ADH4_cytTranscripts15min; double SH9_cytTranscripts7min, SH9_cytTranscripts15min; // Calculate total number of transcripts in the nuclei and cytoplasms of all cells for (bin=1;bin<49;bin++){ ADH4_cytTranscripts7min += bin*adh4_7mincytdata[bin]; ADH4_cytTranscripts15min += bin*adh4_15mincytdata[bin]; SH9_cytTranscripts7min += bin*sh9_7mincytdata[bin]; 1

11 SH9_cytTranscripts15min += bin*sh9_15mincytdata[bin]; double deg_propensity; double current_time, tau, time_rand; double current_cyttranscripts; int hist_bin; double ADH4_degHistogram[1]; //Simulate the cytoplasmic degradation for all cells whilst varying the degradation rates #pragma omp parallel default(none) private (i, hist_bin, time_rand, iteration_no, deg_rate, current_cyttranscripts, tau, current_time, deg_propensity { srand((int)time(null) + omp_get_thread_num()); #pragma omp for for (iteration_no=;iteration_no<2;iteration_no++){ for (deg_rate=.1;deg_rate<.2;deg_rate+=.1){ current_time = ; current_cyttranscripts = ADH4_cytTranscripts7min; //Simulate 8 minutes of reactions while (current_time < 8){ // Calculate the propensities of a reaction occurring deg_propensity = deg_rate*current_cyttranscripts; time_rand = (double)rand()/(rand_max); tau = (1/(deg_propensity))*log(1/time_rand); current_time += tau; current_cyttranscripts = current_cyttranscripts - 1; if (current_cyttranscripts == ){ break; //If end results match the 15 minute data, record the degradation and export rates if (current_cyttranscripts == ADH4_cytTranscripts15min){ #pragma omp critical { fprintf(adh4degoutput_file,"%f\n",deg_rate); //Group rates into 1 bins hist_bin = (int)((deg_rate - fmod(deg_rate,.2))/.2); ADH4_degHistogram[hist_bin] ++; if (iteration_no%25 == ){ printf("%d ADH4 iterations completed\n",iteration_no); #pragma omp barrier printf("adh4 simulations completed\n"); fclose(adh4degoutput_file); //Repeat the process for the SH9 data double SH9_degHistogram[1]; for (i=;i<1;i++){ SH9_degHistogram[i] = ; #pragma omp parallel default(none) private (i, hist_bin, time_rand, iteration_no, deg_rate, current_cyttranscripts, tau, current_time, deg_propensity { srand((int)time(null) + omp_get_thread_num()); #pragma omp for for (iteration_no=;iteration_no<2;iteration_no++){ for (deg_rate=.1;deg_rate<.2;deg_rate+=.1){ current_time = ; current_cyttranscripts = SH9_cytTranscripts7min; //Simulate 8 minutes of reactions while (current_time < 8){ // Calculate the propensities of a reaction occurring 11

12 deg_propensity = deg_rate*current_cyttranscripts; time_rand = (double)rand()/(rand_max); tau = (1/(deg_propensity))*log(1/time_rand); current_time += tau; current_cyttranscripts = current_cyttranscripts - 1; if (current_cyttranscripts == ){ break; //If end results match the 15 minute data, record the degradation and export rates if (current_cyttranscripts == SH9_cytTranscripts15min){ #pragma omp critical { fprintf(sh9degoutput_file,"%f\n",deg_rate); //Group rates into 1 bins hist_bin = (int)((deg_rate - fmod(deg_rate,.2))/.2); SH9_degHistogram[hist_bin] ++; if (iteration_no%25 == ){ printf("%d SH9 iterations completed\n",iteration_no); #pragma omp barrier printf("sh9 simulations completed\n"); fclose(sh9degoutput_file); //Calculate total number of values in the rates histograms to normalise the beta distributions double total_adh4degtranscripts; double total_sh9degtranscripts; for (i=;i<1;i++){ if (!isnan(adh4_deghistogram[i])){ total_adh4degtranscripts += ADH4_degHistogram[i]; if (!isnan(sh9_deghistogram[i])){ total_sh9degtranscripts += SH9_degHistogram[i]; printf("fitting beta distributions...\n"); //Fit beta distributions for varying values of alpha and beta using a chi-square statistic double global_adh4degchisq, global_sh9degchisq; double local_adh4degchisq, local_sh9degchisq; double global_adh4degalpha, global_adh4degbeta; double global_sh9degalpha, global_sh9degbeta; double beta_values[1], norm_betavalues[1]; double total_betavalues; double alpha, beta, x; FILE *ADH4_ratesFile; FILE *SH9_ratesFile; ADH4_ratesFile = fopen("adh4gal1beta_rates.txt","w"); SH9_ratesFile = fopen("sh9gal1beta_rates.txt","w"); fprintf(adh4_ratesfile,"deg_alpha\tdeg_beta\n"); fprintf(sh9_ratesfile,"deg_alpha\tdeg_beta\n"); global_adh4degchisq = 1; global_sh9degchisq = 1; for (alpha=1;alpha<2;alpha+=.1){ for (beta=1;beta<2;beta+=.1){ for (x=;x<1;x+=.1){ beta_values[(int)(x*1)] = ((pow(x,(alpha-1)))*(pow((1-x),(beta-1)))); total_betavalues = ; for (i = ;i<1;i++){ total_betavalues += beta_values[i]; for (i=;i<1;i++){ norm_betavalues[i] = beta_values[i]/total_betavalues; local_adh4degchisq = ; 12

13 local_sh9degchisq = ; //Calculate the chi-square statistic for each histogram for (i=;i<1;i++){ if (norm_betavalues[i] > ){ if (ADH4_degHistogram[i] > 4){ local_adh4degchisq += ((ADH4_degHistogram[i] - total_adh4degtranscripts*norm_betavalues[i])*(adh4_deghistogram[i] - total_adh4degtranscript if (SH9_degHistogram[i] > 4){ local_sh9degchisq += ((SH9_degHistogram[i] - total_sh9degtranscripts*norm_betavalues[i])*(sh9_deghistogram[i] - total_sh9degtranscripts*nor //Accept the values of alpha and beta if the chi-square statistic is better than //the previous best value if ((!isinf(local_adh4degchisq) && (!isnan(local_adh4degchisq)))){ if (local_adh4degchisq < global_adh4degchisq){ global_adh4degchisq = local_adh4degchisq; global_adh4degalpha = alpha; global_adh4degbeta = beta; if ((!isinf(local_sh9degchisq) && (!isnan(local_sh9degchisq)))){ if (local_sh9degchisq < global_sh9degchisq){ global_sh9degchisq = local_sh9degchisq; global_sh9degalpha = alpha; global_sh9degbeta = beta; fprintf(adh4_ratesfile,"%f\t%f\n",global_adh4degalpha,global_adh4degbeta); fprintf(sh9_ratesfile,"%f\t%f\n",global_sh9degalpha,global_sh9degbeta); fclose(adh4_ratesfile); fclose(sh9_ratesfile); printf("adh4 deg alpha: %f, deg beta: %f, deg rate: %f\n",global_adh4degalpha,global_adh4degbeta,(global_adh4degalpha-1)/((global_adh4degalpha+global printf("sh9 deg alpha: %f, deg beta: %f, deg rate: %f\n",global_sh9degalpha,global_sh9degbeta,(global_sh9degalpha-1)/((global_sh9degalpha+global_sh9d 13


More information

COMP s1 Lecture 1

COMP s1 Lecture 1 COMP1511 18s1 Lecture 1 1 Numbers In, Numbers Out Andrew Bennett more printf variables scanf 2 Before we begin introduce yourself to the person sitting next to you why did

More information

Lecture 6: Parallel Matrix Algorithms (part 3)

Lecture 6: Parallel Matrix Algorithms (part 3) Lecture 6: Parallel Matrix Algorithms (part 3) 1 A Simple Parallel Dense Matrix-Matrix Multiplication Let A = [a ij ] n n and B = [b ij ] n n be n n matrices. Compute C = AB Computational complexity of

More information

Introduction to MPI. Ricardo Fonseca. https://sites.google.com/view/rafonseca2017/

Introduction to MPI. Ricardo Fonseca. https://sites.google.com/view/rafonseca2017/ Introduction to MPI Ricardo Fonseca https://sites.google.com/view/rafonseca2017/ Outline Distributed Memory Programming (MPI) Message Passing Model Initializing and terminating programs Point to point

More information

OpenMP. António Abreu. Instituto Politécnico de Setúbal. 1 de Março de 2013

OpenMP. António Abreu. Instituto Politécnico de Setúbal. 1 de Março de 2013 OpenMP António Abreu Instituto Politécnico de Setúbal 1 de Março de 2013 António Abreu (Instituto Politécnico de Setúbal) OpenMP 1 de Março de 2013 1 / 37 openmp what? It s an Application Program Interface

More information

CS3157: Advanced Programming. Outline

CS3157: Advanced Programming. Outline CS3157: Advanced Programming Lecture #8 Feb 27 Shlomo Hershkop shlomo@cs.columbia.edu 1 Outline More c Preprocessor Bitwise operations Character handling Math/random Review for midterm Reading: k&r ch

More information

Distributed and Parallel Technology

Distributed and Parallel Technology Distributed and Parallel Technology Parallel Performance Tuning Hans-Wolfgang Loidl http://www.macs.hw.ac.uk/~hwloidl School of Mathematical and Computer Sciences Heriot-Watt University, Edinburgh 0 No

More information

Hybrid Programming with MPI and OpenMP. B. Estrade

Hybrid Programming with MPI and OpenMP. B. Estrade Hybrid Programming with MPI and OpenMP B. Estrade Objectives understand the difference between message passing and shared memory models; learn of basic models for utilizing both message

More information

Debugging process. The debugging process can be divided into four main steps: 1. Start your program, specifying anything that might affect its

Debugging process. The debugging process can be divided into four main steps: 1. Start your program, specifying anything that might affect its Debugging Introduction One of the most widely used methods to find out the reason of a strange behavior in a program is the insertion of printf or write statements in the supposed critical area. However

More information

The Message Passing Model

The Message Passing Model Introduction to MPI The Message Passing Model Applications that do not share a global address space need a Message Passing Framework. An application passes messages among processes in order to perform

More information

Parallel Programming Overview

Parallel Programming Overview Parallel Programming Overview Introduction to High Performance Computing 2019 Dr Christian Terboven 1 Agenda n Our Support Offerings n Programming concepts and models for Cluster Node Core Accelerator

More information

PC (PC Cluster Building and Parallel Computing)

PC (PC Cluster Building and Parallel Computing) PC (PC Cluster Building and Parallel Computing) 2011 3 30 כ., כ כ. כ. PC. כ. כ. 1 כ, 2. 1 PC. PC. 2. MPI(Message Passing Interface),. 3 1 PC 7 1 PC... 7 1.1... 7 1.2... 10 2 (Master computer) Linux...

More information