Lecture 10: Cache Coherence: Part I. Parallel Computer Architecture and Programming, CMU, Spring 2013


Cache design review. Let's say your code executes int x = 1; (assume for simplicity that x corresponds to address 0x12345604 in memory, and is not stored in a register). [Diagram: one cache line, showing the line state bits, dirty bit, tag, and data (64 bytes on Intel Core i7).] Two key design axes: write-back vs. write-through caches, and write-allocate vs. write-no-allocate caches.

Review: write-miss behavior of a write-allocate, write-back cache (uniprocessor case). Example: code executes int x = 1;
1. Processor performs a write to an address in a line that is not resident in the cache
2. Cache loads the line from memory
3. One word in the cache line is updated
4. Cache line is marked as dirty
[Diagram: line state, dirty bit, tag, data (64 bytes on Intel Core i7).]
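To make the review concrete, here is a minimal C++ sketch of this behavior for a single line; the names (CacheLine, load_from_memory, writeback_to_memory) are invented for illustration and the memory-system calls are stubs, not a real cache implementation:

    #include <cstdint>
    #include <cstring>

    constexpr int LINE_SIZE = 64;  // bytes per line (as on Intel Core i7)

    // Per-line metadata mirroring the diagram: state bits, tag, and data.
    struct CacheLine {
        bool valid = false;
        bool dirty = false;          // the dirty bit
        uint64_t tag = 0;
        uint8_t data[LINE_SIZE] = {};
    };

    // Stubs standing in for the memory system.
    void writeback_to_memory(const CacheLine&) {}
    void load_from_memory(uint64_t /*tag*/, uint8_t* data) { std::memset(data, 0, LINE_SIZE); }

    // The four write-miss steps of a write-allocate, write-back cache.
    void write_word(CacheLine& line, uint64_t addr, int value) {
        uint64_t tag = addr / LINE_SIZE;
        uint64_t offset = addr % LINE_SIZE;
        if (!line.valid || line.tag != tag) {       // 1. write to non-resident line
            if (line.valid && line.dirty)
                writeback_to_memory(line);          //    (first evict the old dirty line)
            load_from_memory(tag, line.data);       // 2. load line from memory
            line.valid = true;
            line.tag = tag;
        }
        std::memcpy(&line.data[offset], &value, sizeof(value)); // 3. update one word
        line.dirty = true;                          // 4. mark line as dirty
    }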

A shared memory multi-processor. Processors read and write to shared variables
- More precisely: processors issue load and store instructions
A reasonable expectation of memory is: reading a value at address X should return the last value written to address X by any processor. [Diagram: four processors connected by an interconnect to memory and I/O - a nice and simple view of four processors and their shared address space.]

The cache coherence problem. Modern processors replicate contents of memory in local caches. As a result of writes, processors can end up with different values for the same memory location. [Diagram: four processors, each with a cache, connected by an interconnect to memory and I/O. int foo; is stored at address X.] The chart below shows the value of foo (a program variable stored at address X) in main memory and in each processor's cache, assuming write-back cache behavior:

    Action                        P1 $       P2 $       P3 $       P4 $    mem[X]
    (initial)                                                              0
    P1 load X                     0 (miss)                                 0
    P2 load X                     0          0 (miss)                      0
    P1 store X                    1          0                             0
    P3 load X                     1          0          0 (miss)           0
    P3 store X                    1          0          2                  0
    P2 load X                     1          0 (hit)    2                  0
    P1 load Y (evicts X)                     0          2                  1

The cache coherence problem (continued). [Same diagram and chart as the previous slide.] How is this problem different from the mutual exclusion problem? Can you fix the problem by adding locks to your program?

The cache coherence problem Intuitive behavior for memory system: reading value at address X should return the last value written at address X by any processor. Coherence problem exists because there is both global storage (main memory) and per-processor local storage (processor caches) implementing the abstraction of a single shared address space.

Cache hierarchy of the Intel Core i7 CPU (64-byte cache line size):
L1 data cache (private per core): 32 KB, 8-way set associative, write back; 2 x 16B loads + 1 x 16B store per clock; 4-6 cycle latency; up to 10 outstanding misses
L2 cache (private per core): 256 KB, 8-way set associative, write back; 32B / clock; 12 cycle latency; up to 16 outstanding misses
Shared L3 cache (per chip, one bank per core, connected by a ring interconnect): 8 MB, inclusive, 16-way set associative; 32B / clock per bank; 26-31 cycle latency

Intuitive expectation of shared memory: reading the value at an address should return the last value written to that address by any processor. On a uniprocessor, providing this behavior is fairly simple, since writes typically come from one client: the processor
- Exception: device I/O via direct memory access (DMA)

Coherence is an issue even in a single-CPU system. Consider an I/O device performing a DMA data transfer. [Diagram: processor with cache, interconnect, memory, and a network card with a message buffer.]
Case 1: Processor writes to a buffer in main memory, then tells the network card to asynchronously send the buffer. Problem: the network card may transfer stale data if the processor's writes have not yet been flushed to memory.
Case 2: Network card receives a message, copies it into a buffer in main memory using a DMA transfer, then notifies the CPU that the message was received and the buffer is ready to read. Problem: the CPU may read stale data if addresses updated by the network card happen to be in its cache.
Common solutions:
- CPU writes to shared buffers using uncached stores (e.g., driver code)
- OS support: mark virtual memory pages containing shared buffers as not-cacheable, or explicitly flush pages from the cache when I/O completes
In practice, DMA transfers are infrequent compared to CPU loads and stores, so these heavyweight solutions are acceptable.

Problems with the intuition. Intuitive expectation: reading the value at address X should return the last value written to address X by any processor. But what does "last" mean?
- What if two processors write at the same time?
- What if a write by P1 is followed by a read from P2 so close in time that it is impossible to communicate the occurrence of the write to the other processors?
In a sequential program, "last" is determined by program order (not time)
- This holds true within one thread of a parallel program
- But we need a meaningful way to describe order across threads

Definition: coherence. A memory system is coherent if the results of a parallel program's execution are such that for each memory location, there is a hypothetical serial order of all program operations (executed by all processors) to the location that is consistent with the results of execution, and:
1. Memory operations issued by any one processor occur in the order issued by the processor
2. The value returned by a read is the value written by the last write to the location in the serial order
Example chronology of operations on address X: P0 write: 5; P1 read (5); P2 read (5); P0 read (5); P1 write: 25; P0 read (25).

Definition: coherence (said differently). A memory system is coherent if:
1. A read by processor P from address X that follows a write by P to address X returns the value of the write by P (assuming no other processor wrote to X in between)
2. A read by a processor from address X that follows a write by another processor to X returns the written value... if the read and write are sufficiently separated in time (assuming no other write to X occurs in between)
3. Writes to the same location are serialized: two writes to the same location by any two processors are seen in the same order by all processors. (Example: if values 1 and then 2 are written to address X, no processor observes 2 before 1)
Condition 1 is program order (as expected of a uniprocessor system). Condition 2 is write propagation: the news of the write has to eventually get to the other processors. Note that precisely when it is propagated is not specified in the definition of coherence. Condition 3 is write serialization.

Write serialization. Writes to the same location are serialized: two writes to the same location by any two processors are seen in the same order by all processors. (Example: if values 1 and then 2 are written to address X, no processor observes 2 before 1.)
Example: P1 writes value a to X. Then P2 writes value b to X. Consider the situation where the processors observe different orders of the writes:
Order observed by P1: X <- a, ..., X <- b
Order observed by P2: X <- b, ..., X <- a
In terms of the first coherence definition: there is no global ordering of loads and stores to X that is in agreement with the results of this parallel program. (You cannot put the memory operations on a single timeline and have both processors' observations agree with the timeline.)
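This property is visible from ordinary multithreaded code. In the C++11 sketch below (variable and thread names invented for illustration), two threads write to the same atomic location while two readers each sample it twice; even with memory_order_relaxed, which requests no cross-location ordering, every execution has a single modification order for x, so the readers can never simultaneously observe a-then-b and b-then-a:

    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0};  // the shared location "X"

    // Each reader records two successive values of x.
    void reader(int* first, int* second) {
        *first = x.load(std::memory_order_relaxed);
        *second = x.load(std::memory_order_relaxed);
    }

    int main() {
        int p1a, p1b, p2a, p2b;
        std::thread w1([] { x.store(1, std::memory_order_relaxed); });  // write "a"
        std::thread w2([] { x.store(2, std::memory_order_relaxed); });  // write "b"
        std::thread p1(reader, &p1a, &p1b);
        std::thread p2(reader, &p2a, &p2b);
        w1.join(); w2.join(); p1.join(); p2.join();
        // Write serialization forbids P1 seeing 1 then 2 while P2 sees 2 then 1.
        std::printf("P1 saw %d,%d  P2 saw %d,%d\n", p1a, p1b, p2a, p2b);
    }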

Coherence vs. consistency. Coherence defines the behavior of reads and writes to the same memory location. Memory consistency (a topic of a future lecture) defines the behavior of reads and writes to different locations.
- Consistency deals with the WHEN of write propagation
- Coherence only guarantees that a write does eventually propagate
For the purposes of this lecture: if a processor writes to address X and then writes to address Y, then any processor that sees the result of the write to Y also observes the result of the write to X. Why does coherence alone not guarantee this? (Hint: first bullet point.)
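The same C++ relaxed atomics illustrate the gap: they guarantee per-location coherence but say nothing about when a write to one location becomes visible relative to writes to another. In this hedged sketch (names invented), the observer may legally see the new value of y while still reading a stale x; a consistency mechanism such as release/acquire ordering would rule that out:

    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};

    int main() {
        std::thread writer([] {
            x.store(1, std::memory_order_relaxed);  // write to X...
            y.store(1, std::memory_order_relaxed);  // ...then write to Y
        });
        std::thread observer([] {
            if (y.load(std::memory_order_relaxed) == 1) {
                // Coherence alone does not force this to be 1: the write to X
                // may not have propagated to this processor yet.
                int seen = x.load(std::memory_order_relaxed);  // may be 0 or 1
                (void)seen;
            }
        });
        writer.join();
        observer.join();
    }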

Implementing coherence.
Software-based solutions:
- OS uses the page fault mechanism to propagate writes
- Implementations provide memory coherence over clusters of workstations
- We won't discuss these solutions
Hardware-based solutions:
- Snooping-based (today and next class)
- Directory-based (the class after that)

Shared caches: coherence made easy. [Diagram: four processors sharing a single cache in front of memory and I/O.] Obvious scalability problems (the point of a cache is to be local and fast):
- Interference / contention
But a shared cache can have benefits:
- Fine-grained sharing (overlapping working sets)
- Actions by one processor may pre-fetch data for another

Snooping cache-coherence schemes. Main idea: all coherence-related activity is broadcast to all processors in the system (more precisely, to the processors' cache controllers). Cache controllers monitor ("snoop") memory operations and react accordingly to maintain memory coherence. Notice that the cache controller must now respond to actions from "both ends":
1. Load/store requests from its own processor
2. Coherence-related activity broadcast over the interconnect
[Diagram: several processors, each with a cache whose controller snoops the shared interconnect to memory.]

A very simple coherence implementation. Let's assume:
1. Write-through caches
2. The granularity of coherence is a cache line
Upon a write, the cache controller broadcasts an invalidation message. As a result, the next read from other processors will trigger a cache miss, and the processor retrieves the updated value from memory (possible because of the write-through policy).

    Action               Bus activity          P0 $     P1 $     mem[X]
    (initial)                                                    0
    P0 load X            cache miss for X      0                 0
    P1 load X            cache miss for X      0        0        0
    P0 write 100 to X    invalidation for X    100               100
    P1 load X            cache miss for X      100      100      100

Write-through invalidation: state diagram. Notation A / B: if action A is observed by the cache controller, action B is taken. Transitions are initiated either by the local processor (PrRd, PrWr) or by a remote processor's coherence traffic (BusRd, BusWr).
V (Valid): PrRd / --; PrWr / BusWr. Observing BusWr from another processor: transition to I.
I (Invalid): PrRd / BusRd, transition to V; PrWr / BusWr, line stays invalid (assumes a write-no-allocate policy, for simplicity).
Requirements of the interconnect:
1. All write transactions are visible to all cache controllers
2. All write transactions are visible to all cache controllers in the same order
Simplifying assumptions here:
1. Interconnect and memory transactions are atomic
2. The processor waits until its previous memory operation is complete before issuing the next memory operation
3. Invalidation is applied immediately as part of receiving the invalidation broadcast
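The two-state diagram maps almost directly onto code. Below is a minimal sketch of one cache's controller logic for a single line (C++; the types and method names are invented, and real controllers are of course hardware, so this models only the transitions):

    enum class LineState { Invalid, Valid };
    enum class BusMsg { None, BusRd, BusWr };

    struct WriteThroughController {
        LineState state = LineState::Invalid;

        // Local processor-initiated transactions.
        BusMsg processor_read() {
            if (state == LineState::Invalid) {   // I --PrRd / BusRd--> V
                state = LineState::Valid;
                return BusMsg::BusRd;
            }
            return BusMsg::None;                 // V: PrRd / --
        }
        BusMsg processor_write() {
            // Write-through, write-no-allocate: every write goes on the bus,
            // and a write miss does not bring the line into the cache.
            return BusMsg::BusWr;                // V or I: PrWr / BusWr
        }

        // Remote (snooped) transactions from other processors.
        void snoop(BusMsg msg) {
            if (msg == BusMsg::BusWr)            // another processor wrote the line:
                state = LineState::Invalid;      // V --BusWr / --> I
        }
    };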

Write-through policy is inefficient. Every write operation goes out to memory
- Very high bandwidth requirements
Write-back caches absorb most write traffic as cache hits
- Significantly reduces bandwidth requirements
- But now how do we ensure write propagation and serialization?
- This requires more sophisticated coherence protocols
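As a rough, back-of-the-envelope illustration (the numbers here are assumed for the example, not taken from the slides): a core sustaining one 4-byte store per clock at 3 GHz generates 4 B x 3 x 10^9 = 12 GB/sec of write traffic under a write-through policy, so four such cores would need roughly 48 GB/sec of memory write bandwidth before counting any read traffic. This is why absorbing stores as write-back cache hits matters so much.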

Recall the cache line state bits: [diagram: line state, dirty bit, tag, data (64 bytes on Intel Core i7)].

Cache coherence with write-back caches. Chronology of operations on address X: P0 writes to X, then P1 reads X. [Diagram: processors P0 and P1, each with a cache, connected to memory over an interconnect.] The dirty state of a cache line now indicates exclusive ownership:
- Exclusive: this cache is the only cache with a valid copy of the line (so it can safely be written to)
- Owner: this cache is responsible for supplying the line's data upon request (otherwise a load from another processor would get stale data)

Invalidation-based write-back protocol. A line in the exclusive state can be modified without notifying the other caches
- No other caches have the line resident, so other processors cannot read these values (without generating a memory read transaction)
A processor can only write to lines in the exclusive state
- If a processor performs a write to a line that is not exclusive in its cache, the cache controller must first broadcast a read-exclusive transaction
- Read-exclusive tells other caches about the impending write ("you can't read any more, because I'm going to write")
- The read-exclusive transaction is required even if the line is valid (but not exclusive) in the processor's local cache
- Dirty state implies exclusive
When a cache controller snoops a read-exclusive for a line it contains, it must invalidate the line in its cache.

Basic MSI write-back invalidation protocol. Key tasks of the protocol:
- Obtaining exclusive access for a write
- Locating the most recent copy of data on a cache miss
Cache line states:
- Invalid (I)
- Shared (S): line valid in one or more caches
- Modified (M): line valid in exactly one cache (a.k.a. the dirty or exclusive state)
Processor events: PrRd (read), PrWr (write)
Bus transactions:
- BusRd: obtain a copy of the line with no intent to modify
- BusRdX: obtain a copy of the line with intent to modify
- BusWB: write the line out to memory

MSI state transition diagram. Notation A / B: if action A is observed by the cache controller, action B is taken. Transitions are initiated either by the local processor or by a remote processor's coherence traffic; "flush" means flush the dirty line to memory.
M (Modified): PrRd / --; PrWr / --. Observing BusRd: flush, go to S. Observing BusRdX: flush, go to I.
S (Shared): PrRd / --; PrWr / BusRdX, go to M. Observing BusRd / --. Observing BusRdX / --, go to I.
I (Invalid): PrRd / BusRd, go to S; PrWr / BusRdX, go to M.
Alternative state names:
- E (exclusive, read/write access)
- S (potentially shared, read-only access)
- I (invalid, no access)
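As with the write-through protocol, the MSI transitions for a single line can be sketched as a small state machine (C++; invented names, no timing, and assuming atomic bus transactions as the simplifying assumptions above do):

    enum class MSI { Modified, Shared, Invalid };
    enum class Bus { None, BusRd, BusRdX };

    struct MSIController {
        MSI state = MSI::Invalid;

        Bus processor_read() {
            if (state == MSI::Invalid) {         // I --PrRd / BusRd--> S
                state = MSI::Shared;
                return Bus::BusRd;
            }
            return Bus::None;                    // S, M: PrRd / --
        }
        Bus processor_write() {
            if (state != MSI::Modified) {        // I or S --PrWr / BusRdX--> M
                state = MSI::Modified;
                return Bus::BusRdX;
            }
            return Bus::None;                    // M: PrWr / --
        }

        // React to a snooped transaction for this line from another processor.
        void snoop(Bus msg) {
            if (msg == Bus::None) return;
            if (state == MSI::Modified) {
                flush_dirty_line_to_memory();    // M must supply the data (flush)
                state = (msg == Bus::BusRd) ? MSI::Shared : MSI::Invalid;
            } else if (msg == Bus::BusRdX) {
                state = MSI::Invalid;            // S --BusRdX / --> I
            }
        }
        void flush_dirty_line_to_memory() { /* stub for the BusWB transaction */ }
    };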

Does MSI satisfy coherence? Write propagation:
- Achieved via the combination of invalidation on BusRdX and flush from the M state on a subsequent BusRd/BusRdX from another processor
Write serialization:
- Writes that appear on the interconnect are ordered by the order in which they appear on the interconnect (BusRdX)
- Reads that appear on the interconnect are ordered by the order in which they appear on the interconnect (BusRd)
- Writes that don't appear on the interconnect (PrWr to a line already in the M state):
- The sequence of writes to the line comes between two bus transactions for the line
- All writes in the sequence are performed by the same processor, P (and that processor certainly observes them in the correct sequential order)
- All other processors observe notification of these writes only after a bus transaction for the line, so to them all the writes come before the transaction
- So all processors see the writes in the same order

MESI invalidation protocol (MESI, not Messi!). MSI requires two bus transactions for the common case of reading data, then writing to it:
- Transaction 1: BusRd to move from the I to the S state
- Transaction 2: BusRdX to move from the S to the M state
This inefficiency exists even if the application has no sharing at all. Solution: add an additional state E ("exclusive clean"):
- Line is not modified, but only this cache has a copy
- Decouples exclusivity from line ownership (line is not dirty, so the copy in memory is a valid copy of the data)
- Upgrade from E to M does not require a bus transaction

MESI state transition diagram.
M (Modified): PrRd / --; PrWr / --. Observing BusRd: flush, go to S. Observing BusRdX: flush, go to I.
E (Exclusive): PrRd / --; PrWr / --, go to M (no bus transaction). Observing BusRd / --, go to S. Observing BusRdX / --, go to I.
S (Shared): PrRd / --; PrWr / BusRdX, go to M. Observing BusRd / --. Observing BusRdX / --, go to I.
I (Invalid): PrRd / BusRd, go to E if no other cache asserts "shared", go to S if another cache asserts "shared". PrWr / BusRdX, go to M.
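Relative to the MSI sketch above, only two transitions change: a read miss enters E when no other cache asserts the "shared" signal, and the E-to-M upgrade generates no bus traffic. A short sketch of just those paths (again with invented names; other_cache_asserts_shared stands in for the snoop-result wire, and snoop handling follows MSI, with E dropping to S on BusRd and to I on BusRdX without a flush):

    enum class MESI { Modified, Exclusive, Shared, Invalid };
    enum class Bus { None, BusRd, BusRdX };

    bool other_cache_asserts_shared() { return false; }  // stub: pretend no sharers

    struct MESIController {
        MESI state = MESI::Invalid;

        Bus processor_read() {
            if (state == MESI::Invalid) {
                // New in MESI: enter E if no other cache holds the line.
                state = other_cache_asserts_shared() ? MESI::Shared : MESI::Exclusive;
                return Bus::BusRd;
            }
            return Bus::None;
        }
        Bus processor_write() {
            if (state == MESI::Exclusive) {      // E --PrWr / --> M: silent upgrade
                state = MESI::Modified;
                return Bus::None;                // this is the saved bus transaction
            }
            if (state != MESI::Modified) {       // I or S still need BusRdX
                state = MESI::Modified;
                return Bus::BusRdX;
            }
            return Bus::None;
        }
    };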

Lower-level choices. Who should supply data on a cache miss when the line is in the E or S state of another cache?
- Can get the data from memory, or from another cache
- If the source is another cache, which one should provide it?
Cache-to-cache transfers add complexity, but are commonly used today to reduce both the latency of access and the memory bandwidth requirements.

Increasing efficiency (and complexity).
MESIF (5-state invalidation-based protocol):
- Like MESI, but one cache holds the shared line in the F ("forward") state rather than S
- The cache with the line in the F state services misses
- Simplifies the decision of which cache should service a miss (in basic MESI, all caches respond)
- Used by Intel processors
MOESI (5-state invalidation-based protocol):
- In the MESI protocol, the transition from M to S requires a flush to memory
- Instead, transition from M to O (O = "owned, but not exclusive") and do not flush to memory
- Other processors maintain the shared line in the S state; one processor maintains the line in the O state
- Data in memory is stale, so the cache with the line in the O state must service cache misses
- Used in the AMD Opteron

Implications of implementing coherence. Each cache must listen for and react to all coherence traffic broadcast on the interconnect
- It is typical to duplicate cache tags so that tag lookups in response to coherence actions do not interfere with processor loads and stores (the cache can check tags in response to the processor and to remote coherence messages in parallel)
Additional traffic on the chip interconnect
- Can be significant when scaling to higher core counts
Most modern multi-core CPUs implement cache coherence. To date, GPUs do not implement cache coherence
- Thus far, the overhead of coherence has been deemed not worth the benefits for graphics and scientific computing applications (NVIDIA GPUs provide a single shared L2 plus atomic memory operations)

Implications for the software developer. What could go wrong with this code?

    // allocate per-thread variable for local accumulation
    int mycounter[num_threads];

Better:

    // allocate per-thread variable for local accumulation,
    // padded out to a full 64-byte cache line
    struct PerThreadState {
        int mycounter;
        char padding[64 - sizeof(int)];
    };
    PerThreadState mycounter[num_threads];
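A common alternative (C++11; a suggestion of mine rather than the slide's) gets the same effect with alignment instead of manual padding; since C++17, the standard constant std::hardware_destructive_interference_size in <new> can also replace the hard-coded 64:

    // Each element is aligned (and therefore padded) to a 64-byte line
    // boundary, so two threads' counters can never share a cache line.
    struct alignas(64) PerThreadState {
        int mycounter;
    };
    PerThreadState mycounter[num_threads];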

False sharing: the condition where two processors write to different addresses, but the addresses map to the same cache line. [Diagram: one cache line containing both P1's and P2's data.] The cache line "ping-pongs" between the caches of the writing processors, generating significant amounts of communication due to the coherence protocol. There is no inherent communication; this is entirely artifactual communication. False sharing can be a factor when programming for cache-coherent architectures.

Summary. The cache coherence problem exists because the abstraction of a single shared address space is not actually implemented by a single storage unit in a machine
- Storage is distributed among main memory and local processor caches
Main idea of snooping-based cache coherence: whenever a cache operation occurs that could affect coherence, the cache controller broadcasts a notification to all other cache controllers
- Challenge for HW architects: minimizing the overhead of the coherence implementation
- Challenge for SW developers: be wary of artifactual communication due to the coherence protocol (e.g., false sharing)