Computer Architecture


Computer Architecture
Jens Teubner, TU Dortmund
jens.teubner@cs.tu-dortmund.de
Summer 2016

Part III: Multi-Core Systems

Building a Shared-Memory Multiprocessor

What the programmer likes to think of:
[figure: four CPU cores attached directly to a shared main memory]
Scalability? Moore's Law?

Centralized Shared-Memory Multiprocessor

Caches help mitigate the bandwidth bottleneck(s).
[figure: four CPU cores, each with a private cache, connected through a shared cache to the shared main memory]
A shared bus connects CPU cores and memory; the shared bus may or may not be shared physically. The Intel Core architecture, e.g., implemented this design.

Centralized Shared-Memory Multiprocessor

The shared bus design with caches makes sense:
+ symmetric design; uniform access time for every memory item from every processor
+ private data gets cached locally; behavior identical to that of a uniprocessor
? shared data will be replicated to private caches

Replication is okay for parallel reads. But what about writes to the replicated data? In fact, we'll want to use memory as a mechanism to communicate between processors.

The approach does have limitations, too: for large core counts, the shared bus may still be a (bandwidth) bottleneck.

Caches and Shared Memory

Caching/replicating shared data can cause problems:
[figure: two CPUs with private caches above a shared main memory; both read x (value 4), then one CPU writes x := 42, leaving its cache with x = 42 while the other cache and main memory still hold x = 4]

Challenges: We need well-defined semantics for such scenarios, and we must implement that semantics efficiently.

Cache Coherence

The desired property (semantics) is cache coherence. Most importantly (5):
Writes to the same location are serialized; two writes to the same location (by any two processors) are seen in the same order by all processors.
Note: We did not specify which order will be seen by the processors. Why?

(5) We also demand that a read by processor P returns P's most recent write, provided that no other processor has written to the same location in the meantime. Also, every write must become visible to the other processors after some time.

Cache Coherence Protocol

Multiprocessor (or multicore) systems maintain coherence through a cache coherence protocol.
Idea: Know which cache/memory holds the current value of the item. Other replicas might be stale.
Two alternatives:
1. Snooping-Based Coherence: all processors communicate to agree on item states.
2. Directory-Based Coherence: a centralized directory holds information about the state/whereabouts of data items.

Snooping-Based Cache Coherence

Rationale: All processors have access to a shared bus and can snoop on the bus to track the other processors' activities. Use this to track the sharing state of each cached item.
Metadata for each cache block: (sharing) state, block identification (tag), data; see the sketch below.
Ignoring multiprocessors for a moment: which state information might make sense to keep?
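As a concrete picture, the per-block metadata could be laid out as in the following C sketch; the names and the 64-byte line size are illustrative assumptions, not from the lecture:

    #include <stdint.h>

    typedef enum { INVALID, SHARED, MODIFIED } state_t;  /* sharing state */

    struct cache_block {
        state_t  state;     /* sharing state, maintained via snooping */
        uint64_t tag;       /* block identification                   */
        uint8_t  data[64];  /* cached data (64-byte line assumed)     */
    };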

Strategy 1: Write Update Protocol

Idea: On every write, propagate the write to every copy. Use the bus to broadcast writes. (6)
Pros/cons of this strategy?

(6) The protocol is thus also called a write broadcast protocol.

Strategy 2: Write Invalidate Protocol

Idea: Before writing an item, invalidate all other copies.

Activity     Bus                Cache A    Cache B    Memory
                                                      x = 4
A reads x    cache miss for x   x = 4                 x = 4
B reads x    cache miss for x   x = 4      x = 4      x = 4
A reads x    (cache hit)        x = 4      x = 4      x = 4
B writes x   invalidate x       (invalid)  x = 42     x = 4 (7)
A reads x    cache miss for x   x = 42     x = 42     x = 42

Caches will re-fetch invalidated items automatically. Since the bus is shared, other caches may answer cache-miss messages (this is necessary for write-back caches).

(7) With write-through caches, memory will be updated immediately.

Write Invalidate: Realization

Realization: To invalidate, broadcast the address on the bus. All processors continuously snoop on the bus:
- invalidate message for an address held in the own cache: invalidate the own copy.
- miss message for an address held in the own cache: reply with the own copy (for write-back caches); memory will see this and abort its own read.
What if two processors try to write at the same time?

Write Invalidate: Tracking Sharing States

Through snooping, we can monitor all bus activities by all processors and track the sharing state.
Idea:
- Sending an invalidate makes the local copy the only valid one. Mark the local cache line as modified (→ exclusive).
- If a local cache line is already modified, writes need not be announced on the bus (no invalidate message).
- Upon a read request by another processor: if the local cache line has state modified, answer the request by sending the local version and change the local cache state to shared.

Write Invalidate: State Machine

Local caches track sharing states using a state machine with three states: modified (dirty), shared (clean), and invalid. A uniprocessor cache would only track the dirty/clean distinction; the multiprocessor protocol additionally sends invalidates and reacts to bus events.
[figure: MSI state diagram; one set of edges is triggered by CPU events, the other by snooped bus events]

Transitions on CPU events:
- invalid → shared: CPU read miss; put read miss on bus
- invalid → modified: CPU write miss; put write miss on bus
- shared → modified: CPU write hit; put invalidate on bus (or CPU write miss; put write miss on bus)
- shared → shared: CPU read miss; put read miss on bus
- modified → shared: CPU read miss; write back data, put read miss on bus

Transitions on bus events:
- shared → invalid: write miss or invalidate seen on bus
- modified → invalid: write miss seen on bus; write back block
- modified → shared: read miss seen on bus; write back data
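The bus-event half of this state machine is small enough to sketch directly in code. Below is a hedged C rendering of how a snooping controller might react to bus events for one cache block; the type and function names are illustrative, not from the lecture:

    #include <stdbool.h>

    typedef enum { INVALID, SHARED, MODIFIED } msi_state_t;
    typedef enum { BUS_READ_MISS, BUS_WRITE_MISS, BUS_INVALIDATE } bus_event_t;

    /* React to a snooped bus event for a block held in state *s.
     * Returns true if the block must be written back / supplied. */
    static bool snoop(msi_state_t *s, bus_event_t ev) {
        switch (*s) {
        case MODIFIED:
            if (ev == BUS_READ_MISS)  { *s = SHARED;  return true; } /* supply data  */
            if (ev == BUS_WRITE_MISS) { *s = INVALID; return true; } /* write back   */
            break;
        case SHARED:
            if (ev == BUS_WRITE_MISS || ev == BUS_INVALIDATE)
                *s = INVALID;                                        /* drop copy    */
            break;
        case INVALID:
            break;                                                   /* nothing to do */
        }
        return false;
    }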

Write Invalidate: Notes

Because of the three states modified, shared, and invalid, the protocol on the previous slide is also called the MSI protocol.
The write invalidate protocol ensures that any valid cache block is either in the shared state in one or more caches or in the modified state in exactly one cache. (Any transition to the modified state invalidates all other copies of the block; whenever another cache fetches a copy of the block, the modified state is left.)
The MSI protocol also ensures that every shared item has also been written back to memory.

MSI Protocol Extensions

Actual systems often use extensions of the MSI protocol, e.g.:

MESI ('E' for exclusive)
Distinguishes between exclusive (but clean) and modified (which implies that the copy is exclusive). This optimizes the common case in which an item is first read (→ exclusive) and then modified (→ modified).

MESIF ('F' for forward)
In M(E)SI, if shared items are served by caches (not only by memory), all caches might answer miss requests. MESIF extends the protocol so that at most one shared copy of an item is marked as forward; only this cache will respond to misses on the bus. The Intel i7 employs the MESIF protocol.

MSI Protocol Extensions (cont.)

MOESI ('O' for owned)
owned marks an item that might be outdated in memory; the owner cache is responsible for the item and must respond to data requests (since main memory might be outdated). MOESI allows dirty data to move between caches and avoids the need to write every shared cache block back to memory. The AMD Opteron uses the MOESI protocol.

Limitations of a Shared Bus

Large core counts → high bandwidth demand. Shared buses cannot satisfy the bandwidth demands of modern multiprocessor systems.
Therefore: distribute the memory and communicate through an interconnection network.
Consequence: non-uniform memory access (NUMA) characteristics.

Bandwidth Demand

E.g., Intel Xeon E7-8880 v3: 2.3 GHz clock rate, 18 cores per chip (36 threads), up to 8 processors per system.
Back-of-the-envelope calculation: 1 byte per cycle per core → 331 GB/s. Data-intensive applications might demand much more!
A shared memory bus? Modern bus standards can deliver at most a few tens of GB/s, and switching very high bandwidths is a challenge.
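Spelled out, the back-of-the-envelope calculation is:

    1 byte/cycle × 2.3 × 10^9 cycles/s × 18 cores × 8 processors
      = 331.2 × 10^9 bytes/s ≈ 331 GB/s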

Example: 8-Way Intel Nehalem-EX

[figure: eight CPUs, each with local memory, linked to one another and to I/O hubs]
Interconnect: Intel QuickPath Interconnect (QPI). (8)
Memory may be local, one hop away, or two hops away: non-uniform memory access (NUMA).

(8) The AMD counterpart is HyperTransport.

Distributed Memory and Snooping

Idea: Extend snooping to distributed memory. Broadcast coherence traffic, but send data point-to-point.
Problem solved?

Snooping-Based Cache Coherency: Scalability

Example: scientific applications (Hennessy & Patterson, Sect. I.5).
[figure: miss rates, split into coherence and capacity misses, for the FFT, LU, Barnes, and Ocean benchmarks at processor counts 1 to 16]
The AMD Opteron is a system that still uses this approach.

Directory-Based Cache Coherence

To avoid an all-broadcast coherence protocol, use a directory to keep track of which item is replicated where, and direct coherence messages only to those nodes that actually need them.
Directory: Either keep a global directory (→ scalability?), or define a home node for each memory address; the home node holds the directory for that item. Typically, the directory is distributed along with the memory.
The protocol now involves the directory/-ies (at the items' home nodes) and the individual caches (local to the processors). Parties communicate point-to-point (no broadcasts).
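To make the bookkeeping concrete, here is a minimal C sketch of a per-block directory entry as it might be kept at the home node; the field names and the 64-node limit are illustrative assumptions:

    #include <stdint.h>

    enum dir_state { DIR_UNCACHED, DIR_SHARED, DIR_EXCLUSIVE };

    /* One directory entry per memory block, kept at the block's home node. */
    struct dir_entry {
        enum dir_state state;   /* uncached, shared, or exclusive   */
        uint64_t       sharers; /* bit i set => node i holds a copy */
    };

    /* Register node p as a read sharer of the block. */
    static void add_sharer(struct dir_entry *e, unsigned p) {
        e->sharers |= (uint64_t)1 << p;   /* supports up to 64 nodes */
        e->state = DIR_SHARED;
    }

A full protocol would update state and sharers on every read miss, write miss, and write-back, as the message catalog on the next slide shows.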

Directory-Based Cache Coherence: Messages

Messages sent by individual nodes to maintain coherence (P = requesting node number, A = requested address, D = data contents). The local node is the node where a request originates; the home node is the node where the memory location and its directory entry reside.

- Read miss (local cache → home directory; P, A): Node P has a read miss at address A; request data and make P a read sharer.
- Write miss (local cache → home directory; P, A): Node P has a write miss at address A; request data and make P the exclusive owner.
- Invalidate (local cache → home directory; A): Request to send invalidates to all remote caches that are caching the block at address A.
- Invalidate (home directory → remote cache; A): Invalidate a shared copy of the data at address A.
- Fetch (home directory → remote cache; A): Fetch the block at address A and send it to its home directory; change the state of A in the remote cache to shared.
- Fetch/invalidate (home directory → remote cache; A): Fetch the block at address A and send it to its home directory; invalidate the block in the cache.
- Data value reply (home directory → local cache; D): Return a data value from the home memory.
- Data write-back (remote cache → home directory; A, D): Write back a data value for address A.

The first three messages are requests sent by the local node to the home; the fourth through sixth are sent by the home to a remote node when the home needs the data to satisfy a read or write miss request.
Source: Hennessy & Patterson, Computer Architecture, 5th edition, page 381 (Figure 5.21).

Directory-Based Coherence: State Machine (Caches)

Individual caches use a state machine similar to the snooping one shown earlier, but driven by messages from the home directory instead of bus events.
[figure: MSI-style state diagram with CPU events and directory messages]

Transitions on CPU events:
- invalid → shared: read miss; send read miss message
- invalid → modified: write miss; send write miss message
- shared → modified: write hit; send invalidate message (or write miss; send write miss message)
- modified → shared: read miss; write back data, send read miss message

Transitions on messages from the home directory:
- shared → invalid: invalidate
- modified → invalid: fetch/invalidate; write back block
- modified → shared: fetch; write back data

Directory-Based Coherence: State Machine (Directory)

The directory has its own state machine, with states uncached, shared, and exclusive.
[figure: directory state diagram]

- uncached → shared: read miss; data value reply; Sharers = {P}
- uncached → exclusive: write miss; data value reply; Sharers = {P}
- shared → exclusive: write miss; invalidate; Sharers = {P}; data value reply
- exclusive → shared: read miss; fetch; data value reply; Sharers = {P}
- exclusive → uncached: write back; Sharers = {}

Cache Coherence Cost

Experiment: Several threads randomly increment elements of an integer array; Zipfian probability distribution, no synchronization. (9)
[figure: nanoseconds per iteration for 1 to 8 threads on an Intel Nehalem EX (1.87 GHz, 2 CPUs, 8 cores/CPU); roughly 6.6 ns for a single thread, 13.2 ns for 2 threads on the same core, 19.6 ns for 2 threads on the same chip, and 80.7 ns for 2 threads off-chip]

(9) In general, this will yield incorrect counter values.

Cache Coherence Cost

Two types of coherence misses:

true sharing miss: Data is actually shared among processors; an often-used mechanism to communicate between threads. These misses are unavoidable.

false sharing miss: Processors use different data items, but the items reside in the same cache line. Items get invalidated/migrated even though no data is actually shared.

How can false sharing misses be avoided? (One common remedy is sketched below.)
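The usual remedy is to pad and align per-thread data so that no two threads' items share a cache line. A minimal C11 sketch, assuming 64-byte cache lines:

    #include <stdatomic.h>

    #define CACHE_LINE 64  /* typical x86 line size; check your platform */

    /* Without padding, neighboring counters would share cache lines and
     * every increment would invalidate other threads' copies. */
    struct padded_counter {
        _Alignas(CACHE_LINE) atomic_long value;  /* one counter per line */
    };

    struct padded_counter counters[8];  /* e.g., one per thread */

Each counter now occupies its own cache line, so increments by one thread no longer invalidate the other threads' cached copies.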

Synchronization: Producer/Consumer

Shared data is a convenient mechanism to communicate between threads. What would a producer/consumer protocol based on (only) shared memory look like?

Shared Memory Consistency

Programs like the producer/consumer scenario make assumptions about the observed order of memory operations. Another example:

Thread T1:
    /* prepare data */
    data ← 0x4711;
    /* tell T2 we're done */
    done ← 1;

Thread T2:
    /* wait for T1 */
    while done = 0 do
        /* do nothing */
    /* use data */
    print(data);

Cache coherence will not ensure correct behavior here! Why? (A C11 rendering with explicit ordering follows below.)
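Anticipating the consistency discussion on the following slides: coherence alone does not prevent the hardware (or the compiler) from reordering the two stores in T1 or the two loads in T2. One way to express the required ordering is with C11 atomics; a sketch, not the lecture's code:

    #include <stdatomic.h>

    int data;
    atomic_int done = 0;

    /* Thread T1 (producer) */
    void produce(void) {
        data = 0x4711;
        /* release: the write to data becomes visible before done = 1 */
        atomic_store_explicit(&done, 1, memory_order_release);
    }

    /* Thread T2 (consumer) */
    int consume(void) {
        /* acquire: after observing done = 1, the read of data sees 0x4711 */
        while (atomic_load_explicit(&done, memory_order_acquire) == 0)
            ;  /* spin */
        return data;
    }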

Shared Memory Consistency

We need a consistency model to reason about program behavior. The most straightforward model is sequential consistency:

A multiprocessor system is sequentially consistent if the result of any execution is the same as if the operations of all the processors were executed in some sequential order, and the operations of each individual processor appear in this sequence in the order specified by its program.

Sequential Consistency

Example: Assume two programs, Program A with operations A1, A2, A3 and Program B with operations B1, B2, B3. Sequential consistency allows program behavior as if any of the sequences

    A1, A2, B1, A3, B2, B3
    A1, B1, B2, A2, A3, B3
    B1, B2, A1, A2, A3, B3

were executed. However, program runs equivalent to

    A1, B2, A2, B1, A3, B3    or    B1, B2, A3, A2, B3, A1

are not allowed.

Sequential Consistency

Where's the problem? Sequential consistency demands a result as if all memory accesses by each processor were kept in order. But:
- Write buffers make write operations asynchronous.
- Individual memory operations might experience different delays in the interconnection network.
- Dynamic scheduling and speculation also alter the order of execution.
Solution? Enforce in-order execution for memory operations, e.g., wait for acknowledgements from all caches for invalidate messages?

Relaxed Consistency Models

Relaxations of sequential consistency sacrifice consistency guarantees to achieve better performance.
Idea: Relax the strict handling of W→R (a write followed by a read) program order, etc., and ask programmers/compilers to explicitly mark situations where consistency must be maintained.
E.g., relax W→R ('total store ordering' or 'processor consistency'): many programs depend only on the order among writes, and these will run just fine, while write latency can still be hidden. (What about the producer/consumer example shown earlier?)

Relaxed Consistency Models

Relax W→R and W→W ('partial store order'): writes to different locations may appear in a different order; this allows combining writes.

Weak ordering: no order guarantees at all. Goes well with dynamic scheduling; programs must ask for ordering explicitly.

Synchronization in Relaxed Consistency Models

With relaxed consistency, applications may have to issue explicit synchronization instructions, e.g.:

Fence operations: wait until all outstanding memory operations have been completed; e.g., LFENCE, SFENCE, MFENCE on Intel x86.

Locked instructions: Intel allows some instructions to be prefixed with LOCK. This allows complex instructions to appear atomic. Locked instructions are not re-ordered.
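In C11, both mechanisms are reachable through <stdatomic.h>. A sketch of the mapping; the x86 instructions named in the comments are what mainstream compilers typically emit, not a guarantee:

    #include <stdatomic.h>

    atomic_long counter = 0;

    void worker(void) {
        /* atomic read-modify-write; on x86 typically a LOCK XADD / LOCK ADD */
        atomic_fetch_add(&counter, 1);

        /* full memory fence; on x86 typically an MFENCE */
        atomic_thread_fence(memory_order_seq_cst);
    }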

Synchronization: Mutual Exclusion

Two threads want to use a critical region. How can synchronization be built on (only) shared memory?

Need for Atomic Operations

Problem: A lock implementation for n threads that only uses shared memory must write to O(n) distinct memory locations. With unclear memory consistency, even that might not be sufficient.
Therefore: hardware support for atomic read/modify operations, e.g.:
- test-and-set(v, a): set variable v to a and return the value that v had before the assignment.
- fetch-and-increment(v): return the value of v and atomically increment it.
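Both primitives correspond directly to C11 atomic operations; a minimal sketch:

    #include <stdatomic.h>

    /* test-and-set(v, a): set v to a, return the old value of v */
    int test_and_set(atomic_int *v, int a) {
        return atomic_exchange(v, a);
    }

    /* fetch-and-increment(v): return v's value and atomically increment it */
    long fetch_and_increment(atomic_long *v) {
        return atomic_fetch_add(v, 1);
    }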

Atomic Operations and the Instruction Set

Instruction sets provide atomic operations in one of two ways:
- explicit test-and-set, fetch-and-increment, etc. instructions, e.g., XADD ('exchange and add') on Intel x86; or
- pairs of instructions that operate together: load linked (load locked) and store conditional.

E.g., MIPS atomic exchange (Hennessy & Patterson):

    try:  MOV   R3, R4     ; move exchange value
          LL    R2, 0(R1)  ; load linked
          SC    R3, 0(R1)  ; store conditional
          BEQZ  R3, try    ; branch if store fails
          MOV   R4, R2     ; put loaded value in R4

Implementing Atomic Operations

Another example: fetch-and-increment:

    try:  LL     R2, 0(R1)   ; load linked
          DADDUI R3, R2, #1  ; increment
          SC     R3, 0(R1)   ; store conditional
          BEQZ   R3, try     ; branch if store fails

Implementation: On LL, remember the address in a link register. Clear the link register whenever an interrupt occurs or an invalidate is received for the address in the link register. On SC, compare the address to the link register.
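Portable code usually sees LL/SC through a compare-and-swap loop instead. A C11 sketch of the same fetch-and-increment; note that, unlike LL/SC, a plain CAS can suffer from the ABA problem, which is harmless for a simple counter:

    #include <stdatomic.h>

    /* Retry until the value is unchanged between load and store. */
    long fetch_and_increment_cas(atomic_long *v) {
        long old = atomic_load(v);                       /* like LL */
        while (!atomic_compare_exchange_weak(v, &old, old + 1))
            ;  /* old was refreshed by the failed CAS; retry (like SC failing) */
        return old;
    }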

Using Atomic Operations: Spinlock

How would a spinlock be implemented? (A minimal variant is sketched below.) See the (excellent) article by Tom Anderson: "The Performance of Spin Lock Alternatives for Shared-Memory Multiprocessors." IEEE Trans. on Parallel and Distributed Systems, vol. 1(1), 1990.
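For reference, a minimal test-and-test-and-set spinlock in C11, one of the variants Anderson analyzes; a sketch, not the lecture's implementation:

    #include <stdatomic.h>

    typedef struct { atomic_int locked; } spinlock_t;  /* 0 = free, 1 = held */

    void spin_lock(spinlock_t *l) {
        for (;;) {
            /* test-and-set: try to grab the lock */
            if (atomic_exchange_explicit(&l->locked, 1,
                                         memory_order_acquire) == 0)
                return;
            /* test-and-test-and-set: spin on a read-only (shared-state) copy
             * to avoid hammering the bus with invalidations */
            while (atomic_load_explicit(&l->locked, memory_order_relaxed) == 1)
                ;  /* spin */
        }
    }

    void spin_unlock(spinlock_t *l) {
        atomic_store_explicit(&l->locked, 0, memory_order_release);
    }

Spinning on the plain load keeps the line in the shared state in every waiter's cache; only when the lock is released do the waiters race with a single test-and-set each.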

Locking Strategies

There are two strategies to implement locking:

Spinning ('busy waiting'; can be done in user space): the waiting thread repeatedly polls the lock until it becomes free. Cost: two cache-miss penalties (if implemented well), roughly 150 ns. The thread burns CPU cycles while spinning.

Blocking (an operating system service): de-schedule the waiting thread until the lock becomes free. Cost: two context switches (one to sleep, one to wake up), roughly 12-20 µs.

Thread Synchronization

Blocking:
[figure: timeline with thread 1 working while holding the lock; thread 2 is de-scheduled and woken up once the lock becomes free]

Spinning:
[figure: timeline with thread 1 working while holding the lock; thread 2 spins and continues after only a short delay]

Experiments: Locking Performance

Sun Niagara II (64 hardware contexts):
[figure: throughput (ktps) for ideal, blocking, and spinning as the thread count grows from 0 to 192; '100% load' marked at 64 threads]
Source: Johnson et al. Decoupling Contention Management from Scheduling. ASPLOS 2010.

Spinning Under High Load

Under high load, spinning can cause problems:
[figure: timeline of two threads being alternately scheduled]
- There are more threads than hardware contexts, so the operating system preempts running tasks.
- Working and spinning threads all appear busy to the OS.
- The working thread likely had the longest time share already, so it gets de-scheduled by the OS.
- There is a long delay before the working thread gets re-scheduled.
- By the time the working thread is re-scheduled (and can now make progress), the waiting thread likely gets de-scheduled, too.

Spinning: Priority Inversion

[figure: machine utilization (%), split into work, contention, and priority inversion, for 15 to 191 client threads]
Source: Johnson et al. Decoupling Contention Management from Scheduling. ASPLOS 2010.