CS 61C: Great Ideas in Computer Architecture - Synchronization, OpenMP


CS 61C: Great Ideas in Computer Architecture - Synchronization, OpenMP
Instructor: Miki Lustig

Review of Last Lecture
- Multiprocessor systems use shared memory (single address space)
- Cache coherence implements shared memory even with multiple copies in multiple caches
  - Tracks state of blocks relative to other caches (e.g. MOESI protocol)
  - False sharing a concern

Great Idea #4: Parallelism (Leverage Parallelism & Achieve High Performance)
- Parallel Requests: assigned to computer, e.g. search "Garcia" (Warehouse Scale Computer)
- Parallel Threads: assigned to core, e.g. lookup, ads
- Parallel Instructions: >1 instruction @ one time, e.g. 5 pipelined instructions
- Parallel Data: >1 data item @ one time, e.g. add of 4 pairs of words (A0+B0, A1+B1, A2+B2, A3+B3)
- Hardware descriptions: all gates functioning in parallel at same time
(Diagram: software/hardware levels from Warehouse Scale Computer and Smart Phone down through Computer, Core, Memory, Input/Output, Instruction Unit(s), Functional Unit(s), and Cache Memory to Logic Gates)

Agenda
- Synchronization - A Crash Course
- Administrivia
- OpenMP Introduction
- OpenMP Directives: Workshare, Synchronization
- Bonus: Common OpenMP Pitfalls

Data Races and Synchronization
- Two memory accesses form a data race if different threads access the same location, at least one is a write, and they occur one after another
- Means that the result of a program can vary depending on chance (which thread ran first?)
- Avoid data races by synchronizing writing and reading to get deterministic behavior
- Synchronization done by user-level routines that rely on hardware synchronization instructions

Analogy: Buying Milk
- Your fridge has no milk. You and your roommate will return from classes at some point and check the fridge
- Whoever gets home first will check the fridge, go and buy milk, and return
- What if the other person gets back while the first person is buying milk? You've just bought twice as much milk as you need
- It would've helped to have left a note
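To make the data-race definition concrete, here is a minimal sketch (not from the slides) in C with OpenMP: every thread updates the same counter with no synchronization, so the final value depends on how the loads and stores interleave and typically changes from run to run.

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int count = 0;                 /* shared by all threads (the OpenMP default) */
        #pragma omp parallel
        {
            for (int i = 0; i < 100000; i++)
                count++;               /* read-modify-write races with other threads */
        }
        /* With e.g. 4 threads the "expected" answer is 400000, but lost updates
           usually make the printed value smaller, and it varies run to run. */
        printf("count = %d\n", count);
        return 0;
    }

Compile with gcc -fopenmp and run it a few times; the nondeterministic output is exactly the data-race behavior described above.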

Lock Synchronization (1/2)
- Use a "Lock" to grant access to a region (critical section) so that only one thread can operate at a time
- Need all processors to be able to access the lock, so use a location in shared memory as the lock
- Processors read the lock and either wait (if locked) or set the lock and go into the critical section
  - 0 means lock is free/open/unlocked/lock off
  - 1 means lock is set/closed/locked/lock on

Lock Synchronization (2/2)
- Pseudocode:
    Check lock            <- can loop/idle here if locked
    Set the lock
    Critical section (e.g. change shared variables)
    Unset the lock

Possible Lock Implementation
- Lock (a.k.a. busy wait):
    Get_lock:                    # $s0 -> addr of lock
          addiu $t1,$zero,1      # t1 = Locked value
    Loop: lw    $t0,0($s0)       # load lock
          bne   $t0,$zero,Loop   # loop if locked
    Lock: sw    $t1,0($s0)       # Unlocked, so lock
- Unlock:
    Unlock: sw $zero,0($s0)
- Any problems with this?

Possible Lock Problem
- Thread 1 and Thread 2 both execute:
          addiu $t1,$zero,1
    Loop: lw    $t0,0($s0)
          bne   $t0,$zero,Loop
    Lock: sw    $t1,0($s0)
- If both threads load the lock as 0 before either one stores, both threads think they have set the lock. Exclusive access not guaranteed!

Hardware Synchronization
- Hardware support required to prevent an interloper (another thread) from changing the value
  - Atomic read/write memory operation
  - No other access to the location allowed between the read and write
- How best to implement in software?
  - Single instruction? Atomic swap of register and memory
  - Pair of instructions? One for read, one for write

Synchronization in MIPS
- Load linked:       ll rt,off(rs)
- Store conditional:  sc rt,off(rs)
  - Returns 1 (success) if location has not changed since the ll
  - Returns 0 (failure) if location has changed
- Note that sc clobbers the register value being stored (rt)
  - Need to have a copy elsewhere if you plan on repeating on failure or using the value later
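The check/set/critical/unset pattern above can also be written portably in C. The sketch below is not from the slides; it uses C11's atomic_flag_test_and_set, which is exactly the kind of hardware-backed atomic read-modify-write (an atomic swap / test-and-set) that the following slides build from ll/sc.

    #include <stdatomic.h>
    #include <stdio.h>
    #include <omp.h>

    static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */
    static int shared_counter = 0;

    int main(void) {
        #pragma omp parallel
        {
            for (int i = 0; i < 100000; i++) {
                while (atomic_flag_test_and_set(&lock))  /* "check lock": spin until we set it */
                    ;                                    /* busy wait */
                shared_counter++;                        /* critical section */
                atomic_flag_clear(&lock);                /* unset the lock */
            }
        }
        printf("counter = %d\n", shared_counter);        /* now deterministic */
        return 0;
    }

Because the test and the set happen as one atomic operation, the two-threads-both-grab-the-lock problem shown above cannot occur.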

Synchronization in MIPS Example
- Atomic swap (to test/set lock variable)
- Exchange contents of register and memory: $s4 and Mem($s1)
    try: add $t0,$zero,$s4   # copy value
         ll  $t1,0($s1)      # load linked
         sc  $t0,0($s1)      # store conditional
         beq $t0,$zero,try   # loop if sc fails
         add $s4,$zero,$t1   # load value in $s4
- sc would fail if another thread executes sc here (between the ll and the sc)

Test-and-Set
- In a single atomic operation:
  - Test to see if a memory location is set (contains a 1)
  - Set it (to 1) if it isn't (it contained a zero when tested)
  - Otherwise indicate that the Set failed, so the program can try again
- While accessing, no other instruction can modify the memory location, including other Test-and-Set instructions
- Useful for implementing lock operations

Test-and-Set in MIPS
- Example: MIPS sequence for implementing a T&S at ($s1):
    Try:    addiu $t0,$zero,1
            ll    $t1,0($s1)
            bne   $t1,$zero,Try
            sc    $t0,0($s1)
            beq   $t0,$zero,Try
    Locked:
            # critical section
    Unlock:
            sw $zero,0($s1)

Question (peer instruction): Consider the following code when executed concurrently by two threads. What possible values can result in *($s0)?
    # *($s0) = 100
    lw   $t0,0($s0)
    addi $t0,$t0,1
    sw   $t0,0($s0)

Agenda
- Synchronization - A Crash Course
- Administrivia
- OpenMP Introduction
- OpenMP Directives: Workshare, Synchronization
- Bonus: Common OpenMP Pitfalls

Administrivia
- Project 2: MapReduce
  - Part 1: Due Oct 27th
  - Part 2: Due Nov 2nd
- We received a grant from Amazon totalling >$60k for the project
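The increment in the question is a load, an add, and a store, so two threads can both read 100 and both write 101, or run one after the other and produce 102. A hypothetical C sketch (not from the slides) showing the fix with C11 atomic_fetch_add, which compilers implement with primitives like ll/sc or an atomic swap:

    #include <stdatomic.h>
    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        atomic_int x = 100;              /* plays the role of *($s0) */
        #pragma omp parallel num_threads(2)
        {
            atomic_fetch_add(&x, 1);     /* atomic read-modify-write: no lost update */
        }
        printf("x = %d\n", atomic_load(&x));   /* always 102 with two threads */
        return 0;
    }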

Agenda: Synchronization - A Crash Course; Administrivia; OpenMP Introduction; OpenMP Directives (Workshare, Synchronization); Bonus: Common OpenMP Pitfalls

What is OpenMP?
- API used for multi-threaded, shared memory parallelism
  - Compiler Directives
  - Runtime Library Routines
  - Environment Variables
- Portable, standardized
- Resources: the OpenMP Specification and http://computing.llnl.gov/tutorials/openmp/

Shared Memory Model with Explicit Thread-based Parallelism
- Multiple threads in a shared memory environment; explicit programming model with full programmer control over parallelization
- Pros:
  - Takes advantage of shared memory, programmer need not worry (that much) about data placement
  - Compiler directives are simple and easy to use
  - Legacy serial code does not need to be rewritten
- Cons:
  - Code can only be run in shared memory environments
  - Compiler must support OpenMP (e.g. gcc 4.2)

OpenMP in CS61C
- OpenMP is built on top of C, so you don't have to learn a whole new programming language
  - Make sure to add #include <omp.h>
  - Compile with flag: gcc -fopenmp
  - Mostly just a few lines of code to learn
- You will NOT become experts at OpenMP
  - Use slides as reference, will learn to use it in lab
- Key ideas:
  - Shared vs. private variables
  - OpenMP directives for parallelization, work sharing, synchronization

OpenMP Programming Model
- Fork-Join Model:
  - OpenMP programs begin as a single process (master thread) and execute sequentially until the first parallel region construct is encountered
  - FORK: Master thread then creates a team of parallel threads
    - Statements in the program that are enclosed by the parallel region construct are executed in parallel among the various threads
  - JOIN: When the team threads complete the statements in the parallel region construct, they synchronize and terminate, leaving only the master thread

OpenMP Extends C with Pragmas
- Pragmas are a preprocessor mechanism C provides for language extensions
- Commonly implemented pragmas: structure packing, symbol aliasing, floating point exception modes (not covered in 61C)
- Good mechanism for OpenMP because compilers that don't recognize a pragma are supposed to ignore it
  - Runs on a sequential computer even with embedded pragmas

parallel Pragma and Scope
- Basic OpenMP construct for parallelization:
    #pragma omp parallel
    {
      /* code goes here */
    }
  - Each thread runs a copy of the code within the block
  - Thread scheduling is non-deterministic
- OpenMP default is shared variables
  - To make a variable private, declare it with the pragma:
    #pragma omp parallel private (x)
- This is annoying, but the curly brace MUST go on a separate line from #pragma

Thread Creation
- How many threads will OpenMP create?
- Defined by the OMP_NUM_THREADS environment variable (or a code procedure call)
  - Set this variable to the maximum number of threads you want OpenMP to use
  - Usually equals the number of cores in the underlying hardware on which the program is run

OMP_NUM_THREADS
- OpenMP intrinsic to set number of threads:  omp_set_num_threads(x);
- OpenMP intrinsic to get number of threads:  num_th = omp_get_num_threads();
- OpenMP intrinsic to get Thread ID number:   th_id = omp_get_thread_num();

Parallel Hello World
    #include <stdio.h>
    #include <omp.h>

    int main () {
      int nthreads, tid;
      /* Fork team of threads with private var tid */
      #pragma omp parallel private(tid)
      {
        tid = omp_get_thread_num();    /* get thread id */
        printf("Hello World from thread = %d\n", tid);
        /* Only master thread does this */
        if (tid == 0) {
          nthreads = omp_get_num_threads();
          printf("Number of threads = %d\n", nthreads);
        }
      }   /* All threads join master and terminate */
    }

Agenda: Synchronization - A Crash Course; Administrivia; OpenMP Introduction; OpenMP Directives (Workshare, Synchronization); Bonus: Common OpenMP Pitfalls
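The Hello World above only reads the thread count; the omp_set_num_threads intrinsic listed above can also set it from code instead of the OMP_NUM_THREADS environment variable. A small sketch (the count of 4 is just an example, not something the slides prescribe):

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        omp_set_num_threads(4);            /* same effect as OMP_NUM_THREADS=4 for this program */
        #pragma omp parallel
        {
            printf("thread %d of %d\n",
                   omp_get_thread_num(),   /* thread ID: 0 .. nthreads-1 */
                   omp_get_num_threads()); /* team size inside the parallel region */
        }
        return 0;
    }

Compile with gcc -fopenmp as before; the order of the printed lines is non-deterministic, just like the scheduling of the threads.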

OpenMP Directives (Work-Sharing)
- These are defined within a parallel section:
  - for:      shares iterations of a loop across the threads
  - sections: each section is executed by a separate thread
  - single:   serializes the execution of a thread

Parallel Statement Shorthand
    #pragma omp parallel
    {
      #pragma omp for
      for(i=0; i<len; i++) { ... }
    }
  can be shortened to:
    #pragma omp parallel for
    for(i=0; i<len; i++) { ... }
- Applies when the for directive is the only directive in the parallel section
- Also works for sections

Building Block: for loop
- Parallel for pragma:
    #pragma omp parallel for
    for (i=0; i<max; i++) zero[i] = 0;
- Breaks the for loop into chunks, and allocates each chunk to a separate thread
  - e.g. if max = 100 with 2 threads: assign 0-49 to thread 0, and 50-99 to thread 1
- Loop must have a relatively simple "shape" for an OpenMP-aware compiler to be able to parallelize it
  - Necessary for the run-time system to be able to determine how many of the loop iterations to assign to each thread
- No premature exits from the loop allowed (i.e. no break, return, exit, goto statements)
  - In general, don't jump outside of any pragma block

Building Block: for loop (continued)
    #pragma omp parallel for
    for (i=0; i<max; i++) zero[i] = 0;
- Master thread creates additional threads, each with a separate execution context
- All variables declared outside the for loop are shared by default, except for the loop index, which is private per thread (why?)
- Implicit synchronization at end of the for loop
- Divide index regions sequentially per thread
  - Thread 0 gets 0, 1, ..., (max/n)-1; thread 1 gets max/n, max/n+1, ..., 2*(max/n)-1
  - Why?

OpenMP Timing
- Elapsed wall clock time:  double omp_get_wtime(void);
  - Returns elapsed wall clock time in seconds
  - Time is measured per thread; no guarantee can be made that two distinct threads measure the same time
  - Time is measured from "some time in the past", so subtract the results of two calls to omp_get_wtime to get elapsed time

Matrix Multiply in OpenMP
    start_time = omp_get_wtime();
    #pragma omp parallel for private(tmp, i, j, k)
    for (i=0; i<Mdim; i++){
      for (j=0; j<Ndim; j++){
        tmp = 0.0;
        for( k=0; k<Pdim; k++){
          /* C(i,j) = sum(over k) A(i,k) * B(k,j) */
          tmp += *(A+(i*Pdim+k)) * *(B+(k*Ndim+j));
        }
        *(C+(i*Ndim+j)) = tmp;
      }
    }
    run_time = omp_get_wtime() - start_time;
- Outer loop spread across N threads; inner loops run inside a single thread
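Putting the for pragma and omp_get_wtime together, here is a small self-contained sketch (array size is arbitrary, not from the slides) that times the zero[] loop used as the example above:

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    #define MAX 10000000

    int main(void) {
        int *zero = malloc(MAX * sizeof(int));
        int i;
        double start = omp_get_wtime();      /* wall-clock seconds, "some time in the past" */
        #pragma omp parallel for             /* iterations split in chunks across the threads */
        for (i = 0; i < MAX; i++)
            zero[i] = 0;
        double elapsed = omp_get_wtime() - start;   /* subtract two calls to get elapsed time */
        printf("zeroed %d ints in %f s\n", MAX, elapsed);
        free(zero);
        return 0;
    }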

Notes on Matrix Multiply Example
- More performance optimizations are available:
  - Higher compiler optimization (-O2, -O3) to reduce number of instructions executed
  - Cache blocking to improve memory performance
  - Using SIMD SSE instructions to raise the floating point computation rate (DLP)

OpenMP Directives (Synchronization)
- These are defined within a parallel section:
  - master:   code block executed only by the master thread (all other threads skip it)
  - critical: code block executed by only one thread at a time
  - atomic:   specific memory location must be updated atomically (like a mini-critical section for writing to memory); applies to a single statement, not a code block

OpenMP Reduction
- Reduction specifies that one or more private variables are the subject of a reduction operation at the end of the parallel region
- Clause: reduction(operation:var)
  - Operation: operator to perform on the variables at the end of the parallel region
  - Var: one or more variables on which to perform scalar reduction
    #pragma omp for reduction(+:nSum)
    for (i = START ; i <= END ; i++)
      nSum += i;

Summary
- Data races lead to subtle parallel bugs
- Synchronization via hardware primitives: MIPS does it with Load Linked + Store Conditional
- OpenMP as a simple parallel extension to C
  - During a parallel fork, be aware of which variables should be shared vs. private among threads
  - Work-sharing accomplished with for/sections
  - Synchronization accomplished with critical/atomic/reduction

Bonus Slides
- You are responsible for the material contained on the following slides, though we may not have enough time to get to them in lecture. They have been prepared in a way that should be easily readable and the material will be touched upon in the following lecture.

Agenda: Synchronization - A Crash Course; Administrivia; OpenMP Introduction; OpenMP Directives (Workshare, Synchronization); Bonus: Common OpenMP Pitfalls
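The synchronization directives above are used inside a parallel region. A minimal sketch (not from the slides) showing atomic guarding a single-statement update and critical guarding a multi-statement block:

    #include <stdio.h>
    #include <omp.h>

    int main(void) {
        int hits = 0;
        double max_val = 0.0;
        #pragma omp parallel
        {
            double v = (double) omp_get_thread_num();  /* stand-in for per-thread work */

            #pragma omp atomic           /* atomic: guards exactly this one statement */
            hits += 1;

            #pragma omp critical         /* critical: one thread at a time in this block */
            {
                if (v > max_val)
                    max_val = v;
            }
        }
        printf("hits = %d, max_val = %f\n", hits, max_val);
        return 0;
    }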

OpenMP Pitfall #1: Data Dependencies
- Consider the following code:
    a[0] = 1;
    for(i=1; i<5000; i++)
      a[i] = i + a[i-1];
- There are dependencies between loop iterations
- Splitting this loop between threads does not guarantee in-order execution
- Out-of-order loop execution will result in undefined behavior (i.e. likely a wrong result)

OpenMP Pitfall #2: Sharing Issues
- Consider the following loop:
    #pragma omp parallel for
    for(i=0; i<n; i++){
      temp = 2.0*a[i];
      a[i] = temp;
      b[i] = c[i]/temp;
    }
- temp is a shared variable!
    #pragma omp parallel for private(temp)
    for(i=0; i<n; i++){
      temp = 2.0*a[i];
      a[i] = temp;
      b[i] = c[i]/temp;
    }

OpenMP Pitfall #3: Updating Shared Variables Simultaneously
- Now consider a global sum:
    for(i=0; i<n; i++)
      sum = sum + a[i];
- This can be done by surrounding the summation with a critical/atomic section or a reduction clause (a critical-section version is sketched after these pitfall slides):
    #pragma omp parallel for reduction(+:sum)
    for(i=0; i<n; i++)
      sum = sum + a[i];
- The compiler can generate highly efficient code for reduction

OpenMP Pitfall #4: Parallel Overhead
- Spawning and releasing threads results in significant overhead
- Better to have fewer but larger parallel regions
  - Parallelize over the largest loop that you can (even though it will involve more work to declare all of the private variables and eliminate dependencies)

OpenMP Pitfall #4: Parallel Overhead (example)
    start_time = omp_get_wtime();
    for (i=0; i<Ndim; i++){
      for (j=0; j<Mdim; j++){
        tmp = 0.0;
        #pragma omp parallel for reduction(+:tmp)
        for( k=0; k<Pdim; k++){
          /* C(i,j) = sum(over k) A(i,k) * B(k,j) */
          tmp += *(A+(i*Ndim+k)) * *(B+(k*Pdim+j));
        }
        *(C+(i*Ndim+j)) = tmp;
      }
    }
    run_time = omp_get_wtime() - start_time;
- Too much overhead in thread generation to have this statement run this frequently
- Poor choice of loop to parallelize
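For completeness, here is the critical-section alternative mentioned in pitfall #3: each thread accumulates a private partial sum and only the final combine step is serialized. This is a sketch, not from the slides; in practice the reduction clause shown above is the simpler and usually faster choice.

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void) {
        static double a[N];
        for (int i = 0; i < N; i++) a[i] = 1.0;

        double sum = 0.0;
        #pragma omp parallel
        {
            double local = 0.0;                 /* private partial sum: no races here */
            #pragma omp for
            for (int i = 0; i < N; i++)
                local += a[i];

            #pragma omp critical                /* serialize only the final combine step */
            sum += local;
        }
        printf("sum = %f\n", sum);              /* expect 1000000.0 */
        return 0;
    }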
