CS 61C: Great Ideas in Computer Architecture TLP and Cache Coherency

CS 61C: Great Ideas in Computer Architecture TLP and Cache Coherency Instructor: Bernhard Boser and Randy H. Katz http://inst.eecs.berkeley.edu/~cs61c/fa16 11/10/16 Fall 2016 -- Lecture #20 1

New-School Machine Structures Harness Parallelism & Achieve High Performance. Software: Parallel Requests assigned to computer, e.g., search "Katz"; Parallel Threads assigned to core, e.g., lookup, ads; Parallel Instructions, >1 instruction @ one time, e.g., 5 pipelined instructions; Parallel Data, >1 data item @ one time, e.g., add of 4 pairs of words; Hardware descriptions, all gates @ one time; Programming Languages. Hardware: Warehouse Scale Computer, Smart Phone, Computer with Core(s), Memory (Cache), Input/Output, Instruction Unit(s), Functional Unit(s) (A0+B0, A1+B1, A2+B2, A3+B3), Logic Gates. 11/10/16 Fall 2016 -- Lecture #20 2

Agenda Review: Thread Level Parallelism OpenMP Reduction Multiprocessor Cache Coherency False Sharing And in Conclusion, ... 11/10/16 Fall 2016 -- Lecture #20 3

Agenda Review: Thread Level Parallelism OpenMP Reduction Multiprocessor Cache Coherency False Sharing And in Conclusion, ... 11/10/16 Fall 2016 -- Lecture #20 4

ILP vs. TLP Instruction Level Parallelism Multiple instructions in execution at the same time, e.g., instruction pipelining Superscalar: launch more than one instruction at a time, typically from one instruction stream ILP limited because of pipeline hazards 11/10/16 Fall 2016 -- Lecture #20 5

ILP vs. TLP Thread Level Parallelism Thread: sequence of instructions, with own program counter and processor state (e.g., register file) Multicore: Physical CPU: One thread (at a time) per CPU, in software OS switches threads typically in response to I/O events like disk read/write Logical CPU: Fine-grain thread switching, in hardware, when thread blocks due to cache miss/memory access Hyperthreading (aka Simultaneous Multithreading--SMT): Exploit superscalar architecture to launch instructions from different threads at the same time! 11/10/16 Fall 2016 -- Lecture #20 6

SMT (HT): Logical CPUs > Physical CPUs Run multiple threads at the same time per core Each thread has own architectural state (PC, Registers, Conditions) Share resources (cache, instruction unit, execution units) Improves Core CPI (clock ticks per instruction) May degrade Thread CPI (Utilization/Bandwidth v. Latency) See http://dada.cs.washington.edu/smt/ 11/10/16 Fall 2016 -- Lecture #20 7

(Chip) Multicore Multiprocessor SMP: (Shared Memory) Symmetric Multiprocessor Two or more identical CPUs/Cores Single shared coherent memory 11/10/16 Fall 2016 -- Lecture #20 9

Randy's (Old) Mac Air /usr/sbin/sysctl -a | grep hw\. hw.model = Core i7 4650U; hw.physicalcpu: 2; hw.logicalcpu: 4; hw.cpufrequency = 1,700,000,000; hw.physmem = 8,589,934,592 (8 GBytes); hw.cachelinesize = 64; hw.l1icachesize: 32,768; hw.l1dcachesize: 32,768; hw.l2cachesize: 262,144; hw.l3cachesize: 4,194,304; ... 11/10/16 Fall 2016 -- Lecture #20 10

Hive Machines hw.model = Core i7 4770K; hw.physicalcpu: 4; hw.logicalcpu: 8; hw.cpufrequency = 3,900,000,000; hw.physmem = 34,359,738,368; hw.cachelinesize = 64; hw.l1icachesize: 32,768; hw.l1dcachesize: 32,768; hw.l2cachesize: 262,144; hw.l3cachesize: 8,388,608. Therefore, you should try up to 8 threads to see if there is a performance gain, even though there are only 4 physical cores (a sketch of such a sweep follows). 11/10/16 Fall 2016 -- Lecture #20 11
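
The following is a minimal sketch of how one might sweep the thread count as suggested above; it is not course code, and work() is a hypothetical stand-in for whatever computation is being tuned.

#include <omp.h>
#include <stdio.h>

// Hypothetical stand-in for the computation being benchmarked.
double work(void) {
  double s = 0.0;
  #pragma omp parallel for reduction(+:s)
  for (long i = 0; i < 100000000L; i++)
    s += 1.0 / (1.0 + i);
  return s;
}

int main(void) {
  for (int t = 1; t <= 8; t++) {          // up to 8 logical CPUs on the hive machines
    omp_set_num_threads(t);               // request t threads for subsequent parallel regions
    double start = omp_get_wtime();
    double result = work();
    printf("%d thread(s): %.3f s (result %f)\n", t, omp_get_wtime() - start, result);
  }
  return 0;
}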

Why Parallelism? Only path to performance is parallelism Clock rates flat or declining SIMD: 2X width every 3-4 years AVX-512 2015, 1024b in 2018? 2019? MIMD: Add 2 cores every 2 years (2, 4, 6, 8, 10, ) Intel Broadwell-Extreme (2Q16): 10 Physical CPUs, 20 Logical CPUs Key challenge: craft parallel programs with high performance on multiprocessors as # of processors increase i.e., that scale Scheduling, load balancing, time for synchronization, overhead for communication Project #4: fastest code on 8 processor computer 2 logical CPUs/core, 8 cores/computer 11/10/16 Fall 2016 -- Lecture #20 12

Agenda Review: Thread Level Parallelism OpenMP Reduction Multiprocessor Cache Coherency False Sharing And in Conclusion, ... 11/10/16 Fall 2016 -- Lecture #20 13

Review: OpenMP Building Block: for loop
for (i=0; i<max; i++)
  zero[i] = 0;
Breaks the for loop into chunks and allocates each to a separate thread, e.g., if max = 100 with 2 threads: assign 0-49 to thread 0 and 50-99 to thread 1. The loop must have a relatively simple shape for an OpenMP-aware compiler to be able to parallelize it: the run-time system must be able to determine how many of the loop iterations to assign to each thread. No premature exits from the loop are allowed, i.e., no break, return, exit, or goto statements. In general, don't jump outside of any pragma block. 11/10/16 Fall 2016 -- Lecture #20 14

Review: OpenMP Parallel for pragma
#pragma omp parallel for
for (i=0; i<max; i++)
  zero[i] = 0;
The master thread creates additional threads, each with a separate execution context. All variables declared outside the for loop are shared by default, except for the loop index, which is implicitly private per thread. There is an implicit barrier synchronization at the end of the for loop. Index regions are divided sequentially per thread: Thread 0 gets 0, 1, ..., (max/n)-1; Thread 1 gets max/n, max/n+1, ..., 2*(max/n)-1. Why? (A sketch of this division follows.) 15
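
Here is a small sketch, not from the slides, that makes this default division visible by recording which thread executed each iteration; MAX = 16 and the array name owner are arbitrary choices for illustration.

#include <omp.h>
#include <stdio.h>
#define MAX 16

int main(void) {
  int owner[MAX];
  // With the usual static schedule, each thread gets one contiguous chunk of the index range.
  #pragma omp parallel for
  for (int i = 0; i < MAX; i++)
    owner[i] = omp_get_thread_num();      // record which thread ran iteration i
  for (int i = 0; i < MAX; i++)
    printf("iteration %2d ran on thread %d\n", i, owner[i]);
  return 0;
}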

OpenMP Timing Elapsed wall clock time: double omp_get_wtime(void); Returns elapsed wall clock time in seconds. Time is measured per thread; no guarantee can be made that two distinct threads measure the same time. Time is measured from some point in the past, so subtract the results of two calls to omp_get_wtime to get elapsed time (see the usage sketch below). 11/10/16 Fall 2016 -- Lecture #20 16
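
A minimal usage sketch of this timing pattern (not from the slides); the zeroing loop and array size are arbitrary choices borrowed from the earlier review slide.

#include <omp.h>
#include <stdio.h>
#define MAX 1000000

double zero[MAX];

int main(void) {
  double start = omp_get_wtime();            // some time in the past; only differences matter
  #pragma omp parallel for
  for (int i = 0; i < MAX; i++)
    zero[i] = 0.0;
  double elapsed = omp_get_wtime() - start;  // elapsed wall-clock seconds
  printf("zeroing %d doubles took %f s\n", MAX, elapsed);
  return 0;
}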

Matrix Multiply in OpenMP
// C[M][N] = A[M][P] * B[P][N]
start_time = omp_get_wtime();
#pragma omp parallel for private(tmp, j, k)
for (i = 0; i < M; i++) {       // outer loop iterations spread across the threads
  for (j = 0; j < N; j++) {     // inner loops run inside a single thread
    tmp = 0.0;
    for (k = 0; k < P; k++) {
      /* C(i,j) = sum(over k) A(i,k) * B(k,j) */
      tmp += A[i][k] * B[k][j];
    }
    C[i][j] = tmp;
  }
}
run_time = omp_get_wtime() - start_time;
11/10/16 Fall 2016 -- Lecture #20 17

Matrix Multiply in OpenMP More performance optimizations available: higher compiler optimization (-O2, -O3) to reduce the number of instructions executed; cache blocking to improve memory performance (see the sketch below); using SIMD AVX instructions to raise the floating-point computation rate (DLP). 11/10/16 Fall 2016 -- Lecture #20 18
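
To make the cache-blocking idea concrete, here is a hedged sketch of a tiled version of the loop nest above; it assumes the same A, B, C, M, N, P declarations as the slide's fragment, that C has been zero-initialized, and an arbitrary tile size BLOCK. It is an illustration, not the course's reference implementation.

#define BLOCK 32
#pragma omp parallel for collapse(2)
for (int ii = 0; ii < M; ii += BLOCK)
  for (int jj = 0; jj < N; jj += BLOCK)
    for (int kk = 0; kk < P; kk += BLOCK)
      // One thread owns each (ii, jj) tile and accumulates over every kk block,
      // so there is no race on C[i][j]; the tiles are sized to stay cache-resident.
      for (int i = ii; i < ii + BLOCK && i < M; i++)
        for (int j = jj; j < jj + BLOCK && j < N; j++) {
          double tmp = C[i][j];
          for (int k = kk; k < kk + BLOCK && k < P; k++)
            tmp += A[i][k] * B[k][j];
          C[i][j] = tmp;
        }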

New: OpenMP Reduction
double avg, sum = 0.0, A[MAX]; int i;
#pragma omp parallel for private(sum)
for (i = 0; i < MAX; i++)
  sum += A[i];
avg = sum/MAX;  // bug
The problem is that we really want the sum over all threads! Reduction specifies that one or more variables that are private to each thread are the subject of a reduction operation at the end of the parallel region: reduction(operation:var), where operation is the operator to perform on the variables (var) at the end of the parallel region, and var is one or more variables on which to perform the scalar reduction.
double avg, sum = 0.0, A[MAX]; int i;
#pragma omp parallel for reduction(+ : sum)
for (i = 0; i < MAX; i++)
  sum += A[i];
avg = sum/MAX;
11/10/16 Fall 2016 -- Lecture #20 19

Review: Calculating π Original Version
#include <omp.h>
#include <stdio.h>
#define NUM_THREADS 4
static long num_steps = 100000;
double step;
void main () {
  int i;
  double x, pi, sum[NUM_THREADS];
  step = 1.0/(double) num_steps;
  #pragma omp parallel private(i, x)
  {
    int id = omp_get_thread_num();
    for (i = id, sum[id] = 0.0; i < num_steps; i = i + NUM_THREADS) {
      x = (i + 0.5)*step;
      sum[id] += 4.0/(1.0 + x*x);
    }
  }
  for (i = 1; i < NUM_THREADS; i++)
    sum[0] += sum[i];
  pi = sum[0] / num_steps;
  printf("pi = %6.12f\n", pi);
}
11/10/16 Fall 2016 -- Lecture #20 20

Version 2: parallel for, reduction
#include <omp.h>
#include <stdio.h>
static long num_steps = 100000;
double step;
void main () {
  int i;
  double x, pi, sum = 0.0;
  step = 1.0/(double) num_steps;
  #pragma omp parallel for private(x) reduction(+:sum)
  for (i = 1; i <= num_steps; i++) {
    x = (i - 0.5)*step;
    sum = sum + 4.0/(1.0 + x*x);
  }
  pi = sum / num_steps;
  printf("pi = %6.8f\n", pi);
}
11/10/16 Fall 2016 -- Lecture #20 21

Administrivia Midterm II graded; the regrade period is open until Monday @ 11:59:59 PM. The mean is similar to Midterm I, approximately 60% of the points. What to do about Friday labs and Veterans Day? The Wednesday before Thanksgiving? R&R Week? Make-ups for Friday labs are posted on Piazza. Project #4-1 released. 11/10/16 Fall 2016 -- Lecture #20 22

Agenda Review: Thread Level Parallelism OpenMP Reduction Multiprocessor Cache Coherency False Sharing And in Conclusion, ... 11/10/16 Fall 2016 -- Lecture #20 23

Multiprocessor Key Questions Q1 How do they share data? Q2 How do they coordinate? Q3 How many processors can be supported? 11/10/16 Fall 2016 -- Lecture #20 24

Shared Memory Multiprocessor (SMP) Q1 Single address space shared by all processors/cores Q2 Processors coordinate/communicate through shared variables in memory (via loads and stores) Use of shared data must be coordinated via synchronization primitives (locks) that allow access to data to only one processor at a time All multicore computers today are SMP 11/10/16 Fall 2016 -- Lecture #20 25
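
As a concrete illustration of coordinating through shared variables, here is a minimal OpenMP sketch (not from the slides) in which a critical section serializes updates to a shared counter; the variable name shared_count is an assumption.

#include <omp.h>
#include <stdio.h>

int main(void) {
  int shared_count = 0;                    // one shared variable in the single address space
  #pragma omp parallel
  {
    // Only one thread at a time may execute the critical section,
    // so the increments cannot race with each other.
    #pragma omp critical
    shared_count += 1;
  }
  printf("threads that checked in: %d\n", shared_count);
  return 0;
}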

Parallel Processing: Multiprocessor Systems (MIMD) Multiprocessor (MIMD): a computer system with at least 2 processors. (Figure: processors with private caches connected by an interconnection network to memory and I/O.) 1. Deliver high throughput for independent jobs via job-level parallelism. 2. Improve the run time of a single program that has been specially crafted to run on a multiprocessor - a parallel processing program. We now use the term core for processor ("multicore") because "multiprocessor microprocessor" is too redundant. 11/10/16 Fall 2016 -- Lecture #20 26

Multiprocessor Caches Memory is a performance bottleneck even with one processor. Use caches to reduce bandwidth demands on main memory. Each core has a local private cache holding data it has accessed recently. Only cache misses have to access the shared common memory. (Figure: processors with private caches connected by an interconnection network to memory and I/O.) 11/10/16 Fall 2016 -- Lecture #20 27

Shared Memory and Caches What if Processors 1 and 2 read Memory[1000] (value 20)? (Figure: Processors 1 and 2 each now hold a copy of location 1000 with value 20 in their private caches; memory still holds 20.) 11/10/16 Fall 2016 -- Lecture #20 28

Now: Shared Memory and Caches Processor 0 writes Memory[1000] with 40. (Figure: Processor 0's cache and memory now hold 1000 -> 40, but Processors 1 and 2 still hold stale cached copies with value 20.) Problem? 11/10/16 Fall 2016 -- Lecture #20 29

Keeping Multiple Caches Coherent Architect's job: shared memory => keep cache values coherent. Idea: when any processor has a cache miss or writes, notify the other processors via the interconnection network. If only reading, many processors can have copies. If a processor writes, invalidate any other copies. On write transactions from one processor, the other caches snoop the common interconnect, checking for tags they hold, and invalidate any copies of the same address modified in another cache. 11/10/16 Fall 2016 -- Lecture #20 30

How Does HW Keep $ Coherent? Each cache tracks state of each block in cache: 1. Shared: up-to-date data, other caches may have a copy 2. Modified: up-to-date data, changed (dirty), no other cache has a copy, OK to write, memory out-of-date 11/10/16 Fall 2011 -- Lecture #20 31

Two Optional Performance Optimizations of Cache Coherency via New States Each cache tracks state of each block in cache: 3. Exclusive: up-to-date data, no other cache has a copy, OK to write, memory up-to-date Avoids writing to memory if block replaced Supplies data on read instead of going to memory 4. Owner: up-to-date data, other caches may have a copy (they must be in Shared state) Only cache that supplies data on read instead of going to memory 11/10/16 Fall 2011 -- Lecture #20 32

Name of Common Cache Coherency Protocol: MOESI Memory access to cache is either Modified (in cache) Owned (in cache) Exclusive (in cache) Shared (in cache) Invalid (not in cache) Snooping/Snoopy Protocols e.g., the Berkeley Ownership Protocol See http://en.wikipedia.org/wiki/cache_coherence Berkeley Protocol is a wikipedia stub! 11/10/16 Fall 2011 -- Lecture #20 33

Shared Memory and Caches Example, now with cache coherence: Processors 1 and 2 read Memory[1000]; Processor 0 then writes Memory[1000] with 40. (Figure: Processor 0's write invalidates the other copies of location 1000; Processors 1 and 2 lose their cached value 20, and memory is updated to 40.) 11/10/16 Fall 2016 -- Lecture #20 34

Agenda Review: Thread Level Parallelism OpenMP Reduction Multiprocessor Cache Coherency False Sharing And in Conclusion, ... 11/10/16 Fall 2016 -- Lecture #20 36

Cache Coherency Tracked by Block Suppose the block size is 32 bytes, Processor 0 is reading and writing variable X, and Processor 1 is reading and writing variable Y. Suppose X is at location 4000 and Y at 4012, so both fall in the same block. What will happen? (Figure: Cache 0 and Cache 1 each hold the same tag for a 32-byte data block covering word addresses 4000 through 4028.) 11/10/16 Fall 2016 -- Lecture #20 37

Coherency Tracked by Cache Block The block ping-pongs between the two caches even though the processors are accessing disjoint variables. This effect is called false sharing. How can you prevent it? (A sketch demonstrating the effect follows.) 11/10/16 Fall 2016 -- Lecture #20 38
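
A hedged sketch (not from the slides) that makes the ping-pong effect measurable: two threads repeatedly update their own counters, first packed into the same cache block and then spread one block apart. STRIDE = 8 assumes 64-byte blocks and 8-byte doubles; ITERS is arbitrary.

#include <omp.h>
#include <stdio.h>

#define ITERS 20000000L
#define STRIDE 8                            // assumption: 8 doubles = 64 bytes = one block

volatile double counters[2 * STRIDE];       // volatile so every update really stores to the cache

// Two threads each increment their own counter, placed 'stride' doubles apart.
double run(int stride) {
  double start = omp_get_wtime();
  #pragma omp parallel num_threads(2)
  {
    int id = omp_get_thread_num();
    for (long i = 0; i < ITERS; i++)
      counters[id * stride] += 1.0;
  }
  return omp_get_wtime() - start;
}

int main(void) {
  printf("adjacent (same block):    %f s\n", run(1));       // block ping-pongs: false sharing
  printf("padded (separate blocks): %f s\n", run(STRIDE));  // disjoint blocks: no ping-pong
  return 0;
}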

Remember The 3Cs? Compulsory (cold start or process migration, 1st reference): first access to a block, impossible to avoid; small effect for long-running programs. Solution: increase block size (increases miss penalty; very large blocks could increase miss rate). Capacity (not compulsory and ...): the cache cannot contain all blocks accessed by the program, even with a perfect replacement policy in a fully associative cache. Solution: increase cache size (may increase access time). Conflict (not compulsory or capacity and ...): multiple memory locations map to the same cache location. Solution 1: increase cache size. Solution 2: increase associativity (may increase access time). Solution 3: improve the replacement policy, e.g., LRU. 11/10/16 Fall 2016 -- Lecture #20 39

Fourth C of Cache Misses! Coherence Misses Misses caused by coherence traffic with another processor. Also known as communication misses, because they represent data moving between processors working together on a parallel program. For some parallel programs, coherence misses can dominate total misses. 11/10/16 Fall 2016 -- Lecture #20 40

False Sharing in OpenMP
int i; double x, pi, sum[NUM_THREADS];
#pragma omp parallel private(i, x)
{
  int id = omp_get_thread_num();
  for (i = id, sum[id] = 0.0; i < num_steps; i = i + NUM_THREADS) {
    x = (i + 0.5)*step;
    sum[id] += 4.0/(1.0 + x*x);
  }
}
What is the problem? sum[0] occupies 8 bytes in memory and sum[1] occupies the adjacent 8 bytes => false sharing if block size > 8 bytes. 11/10/16 Fall 2016 -- Lecture #20 41

Peer Instruction: Avoid False Sharing
int i; double x, pi, sum[10000];
#pragma omp parallel private(i, x)
{
  int id = omp_get_thread_num(), fix = ___;
  for (i = id, sum[id] = 0.0; i < num_steps; i = i + NUM_THREADS) {
    x = (i + 0.5)*step;
    sum[id*fix] += 4.0/(1.0 + x*x);
  }
}
What is the best value to set fix to in order to prevent false sharing? (A) omp_get_num_threads(); (B) Constant for number of blocks in cache; (C) Constant for size of blocks in bytes; (D) Constant for size of blocks in doubles. 11/10/16 Fall 2016 -- Lecture #20 42

Review: Data Races and Synchronization Two memory accesses form a data race if they come from different threads, access the same location, at least one is a write, and they occur one after another. If there is a data race, the result of the program varies depending on chance (which thread ran first?). Avoid data races by synchronizing writing and reading to get deterministic behavior. Synchronization is done by user-level routines that rely on hardware synchronization instructions (a small example follows). 11/10/16 Fall 2016 -- Lecture #20 43
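
A minimal sketch (not from the slides) of a data race next to one synchronized alternative; the variable names racy and safe are assumptions.

#include <omp.h>
#include <stdio.h>

int main(void) {
  long racy = 0, safe = 0;
  #pragma omp parallel for
  for (long i = 0; i < 1000000; i++) {
    racy += 1;                   // deliberate data race: unsynchronized read-modify-write
    #pragma omp atomic
    safe += 1;                   // atomic update: result is deterministic
  }
  printf("racy = %ld (varies run to run), safe = %ld\n", racy, safe);
  return 0;
}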

Review: Synchronization in MIPS Load linked: ll $rt, off($rs) Reads memory location (like lw) Also sets (hidden) link bit Link bit is reset if memory location (off($rs)) is accessed Store conditional: sc $rt, off($rs) Stores off($rs) = $rt (like sw) Sets $rt=1 (success) if link bit is set i.e. no (other) process accessed off($rs) since ll Sets $rt=0 (failure) otherwise Note: sc clobbers $rt, i.e. changes its value 11/10/16 Fall 2016 -- Lecture #20 44

Review: Lock Synchronization Broken synchronization (in C):
while (lock != 0) ;
lock = 1;
// critical section
lock = 0;
Set $t0 = 1 before calling ll to minimize the time between ll and sc; try again if sc failed (another thread executed sc since the ll above). Fix (lock is at location 0($s1)):
Try:    addiu $t0,$zero,1
        ll    $t1,0($s1)
        bne   $t1,$zero,Try
        sc    $t0,0($s1)
        beq   $t0,$zero,Try
Locked: # critical section
Unlock: sw    $zero,0($s1)
11/10/16 Fall 2016 -- Lecture #20 45
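
For comparison, here is a hedged C sketch (not from the slides) of the same spin-lock pattern built on C11 atomics instead of MIPS ll/sc; the names lock_var, acquire, and release are assumptions.

#include <stdatomic.h>
#include <stdio.h>

atomic_int lock_var = 0;                   // 0 = free, 1 = held

void acquire(void) {
  // atomic_exchange plays the role of the ll/sc pair:
  // it atomically writes 1 and returns the previous value.
  while (atomic_exchange(&lock_var, 1) != 0)
    ;                                      // spin until the lock was observed free
}

void release(void) {
  atomic_store(&lock_var, 0);              // like sw $zero,0($s1)
}

int main(void) {
  acquire();
  // critical section
  release();
  printf("done\n");
  return 0;
}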

Deadlock Deadlock: a system state in which no progress is possible. Dining Philosophers Problem: Think until the left fork is available; when it is, pick it up. Think until the right fork is available; when it is, pick it up. When both forks are held, eat for a fixed amount of time. Then put the right fork down, then put the left fork down. Repeat from the beginning. Solution? (One possibility is sketched below.) 11/10/16 Fall 2016 -- Lecture #20 46
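
One classic answer, sketched here as an illustration rather than the slides' official solution: impose a global order on the forks and always pick up the lower-numbered fork first, so a circular wait can never form. The omp_lock_t array and the philosopher-per-thread mapping are assumptions.

#include <omp.h>
#include <stdio.h>
#define N 5

omp_lock_t fork_lock[N];

void eat(int p) {
  int left = p, right = (p + 1) % N;
  int first = left < right ? left : right;   // always acquire the lower-numbered fork first
  int second = left < right ? right : left;  // ... then the higher-numbered one
  omp_set_lock(&fork_lock[first]);
  omp_set_lock(&fork_lock[second]);
  printf("philosopher %d eats\n", p);
  omp_unset_lock(&fork_lock[second]);
  omp_unset_lock(&fork_lock[first]);
}

int main(void) {
  for (int i = 0; i < N; i++) omp_init_lock(&fork_lock[i]);
  #pragma omp parallel num_threads(N)
  eat(omp_get_thread_num());
  for (int i = 0; i < N; i++) omp_destroy_lock(&fork_lock[i]);
  return 0;
}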

Agenda Review: Thread Level Parallelism OpenMP Reduction Multiprocessor Cache Coherency False Sharing And in Conclusion, ... 11/10/16 Fall 2016 -- Lecture #20 47

And in Conclusion, OpenMP as a simple parallel extension to C: thread-level programming with the parallel for pragma, private variables, reductions, ... C is small, so it is easy to learn, but it is not very high level and it's easy to get into trouble. ILP vs. TLP. CMP (Chip Multiprocessor, aka Symmetric Multiprocessor) vs. SMT (Simultaneous Multithreading). Cache coherency implements shared memory even with multiple copies in multiple caches. False sharing is a concern; watch block size! 48