Recap: Protocol Design Space of Snooping Cache Coherent Multiprocessors. Sequential Consistency.


Recap: Protocol Design Space of Snooping Cache Coherent Multiprocessors
CS 258, Spring 99
David E. Culler, Computer Science Division, U.C. Berkeley

Recap
- Snooping cache coherence solves a difficult problem by applying extra interpretation to naturally occurring events
  » state transitions, bus transactions
- Write-through cache
  » 2-state: invalid, valid
  » no new transactions, no new wires
  » coherence mechanism provides consistency, since all writes appear in bus order
  » poor performance
- Coherent memory system
- Sequential consistency

Sequential Consistency
- Memory operations from a processor become visible (to itself and others) in program order
- There exists a total order, consistent with this partial order, i.e., an interleaving
  » the position at which a write occurs in the hypothetical total order should be the same with respect to all processors
- Sufficient conditions:
  » every process issues memory operations in program order
  » after a write operation is issued, the issuing process waits for the write to complete before issuing its next memory operation
  » after a read is issued, the issuing process waits for the read to complete, and for the write whose value is being returned to complete (globally), before issuing its next operation
- How can compilers violate SC? Architectural enhancements?

Outline for Today
- Design space of snoopy-cache coherence protocols
  » write-back, update protocols
  » lower-level design choices
- Introduction to workload-driven evaluation
- Evaluation of protocol alternatives

Write-back Caches
- 2 processor operations: PrRd, PrWr
- 3 states: invalid, valid (clean), modified (dirty)
  » ownership: who supplies the block
- 2 bus transactions: read (BusRd), write-back (BusWB)
  » only cache-block transfers
- => treat Valid as shared and Modified as exclusive
- => introduce one new bus transaction
  » read-exclusive (BusRdX): read for the purpose of modifying (read-to-own)
[State transition diagram: V/I/M states with PrRd/BusRd, PrWr, Replace/-, and Replace/BusWB edges; garbled in transcription]

MSI Invalidate Protocol (see the C sketch below)
- Read obtains the block in Shared state even if it is the only cached copy
- Obtain exclusive ownership before writing
  » BusRdX causes others to invalidate (demote)
  » if the block is in another cache in M state, that cache will flush it
  » BusRdX is issued even on a hit in S: promote to M (upgrade)
- What about replacement?
  » S -> I, M -> I as before
[State transition diagram: M/S/I states with PrRd/BusRd and BusRdX/Flush edges; garbled in transcription]
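A compact way to view the MSI invalidate protocol is as two cooperating state machines per cache block: one driven by the local processor, one driven by snooped bus transactions. The sketch below is illustrative only; the type names (state_t, bus_t), function names, and the two-cache trace in main are invented for this example, and real controllers handle per-block tags, replacement, and bus arbitration that are elided here.

#include <stdio.h>

typedef enum { INVALID, SHARED, MODIFIED } state_t;
typedef enum { BUS_NONE, BUS_RD, BUS_RDX, BUS_WB } bus_t;

static const char *name[] = { "I", "S", "M" };

/* Processor-side transitions: returns the bus transaction to issue. */
bus_t proc_access(state_t *s, int is_write) {
    switch (*s) {
    case INVALID:
        *s = is_write ? MODIFIED : SHARED;
        return is_write ? BUS_RDX : BUS_RD;       /* miss: fetch block */
    case SHARED:
        if (is_write) { *s = MODIFIED; return BUS_RDX; } /* upgrade */
        return BUS_NONE;                          /* read hit */
    case MODIFIED:
        return BUS_NONE;                          /* read or write hit */
    }
    return BUS_NONE;
}

/* Snoop-side transitions: returns 1 if this cache must flush the block. */
int snoop(state_t *s, bus_t xact) {
    if (xact != BUS_RD && xact != BUS_RDX)
        return 0;                                 /* nothing to snoop */
    int flush = (*s == MODIFIED);                 /* owner supplies data */
    if (xact == BUS_RD  && *s == MODIFIED) *s = SHARED;   /* demote */
    if (xact == BUS_RDX && *s != INVALID)  *s = INVALID;  /* invalidate */
    return flush;
}

int main(void) {
    state_t p1 = INVALID, p3 = INVALID;   /* one block, two caches */
    bus_t b;
    /* A trace in the spirit of the write-back example on the next page. */
    b = proc_access(&p1, 0); snoop(&p3, b);   /* P1 reads: BusRd */
    b = proc_access(&p3, 0); snoop(&p1, b);   /* P3 reads: BusRd */
    b = proc_access(&p3, 1); snoop(&p1, b);   /* P3 writes: BusRdX kills P1 */
    b = proc_access(&p1, 0); snoop(&p3, b);   /* P1 reads: P3 flushes, M->S */
    printf("P1=%s P3=%s\n", name[p1], name[p3]);  /* prints P1=S P3=S */
    return 0;
}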

Example: Write-Back Protocol
[Figure: processors, memory, and I/O devices on a shared bus; an example trace on location u (value 7) showing BusRd, BusRdX, and Flush transactions and the S/M state of each cached copy; garbled in transcription]

Correctness
- When is a write miss performed?
  » How does the writer observe the write?
  » How is it made visible to others?
  » How do they observe the write?
- When is a write hit made visible?

Write Serialization for Coherence
- Writes that appear on the bus (BusRdX) are ordered by the bus
  » performed in the writer's cache before other transactions, so ordered the same with respect to all processors (including the writer)
  » read misses are also ordered with respect to these
- Writes that don't appear on the bus:
  » after P issues BusRdX for block B, further memory operations on B until the next bus transaction are from P (read and write hits), and these are in program order
  » a read or write from another processor is separated by an intervening bus transaction
- Read hits?

Sequential Consistency
- Bus imposes a total order on bus transactions for all locations
- Between transactions, processors perform reads/writes (locally) in program order
- So any execution defines a natural partial order: operation j is subsequent to i if
  » (i) j follows i in program order on the same processor, or
  » (ii) j generates a bus transaction that follows the memory operation for i
- In a segment between two bus transactions, any interleaving of local program orders leads to a consistent total order
- Within a segment, writes observed by processor P are serialized as:
  » writes from other processors, by the previous bus transaction P issued
  » writes from P, by program order

Sufficient Conditions (see the litmus test below)
- Operations are issued in program order
- After a write issues, the issuing process waits for the write to complete before issuing its next memory operation
- After a read issues, the issuing process waits for the read to complete, and for the write whose value is being returned to complete (globally), before issuing its next operation
- Write completion: can detect when the write appears on the bus
- Write atomicity: if a read returns the value of a write, that write has already become visible to all others

Lower-level Protocol Choices
- BusRd observed in M state: what transition to make?
  » M -> I or M -> S
  » depends on expectations of access patterns
- How does memory know whether or not to supply data on BusRd?
- Problem: a read followed by a write is 2 bus transactions, even if there is no sharing
  » BusRd (I -> S) followed by BusRdX or BusUpgr (S -> M)
  » what happens on sequential programs?
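The sufficient conditions above are exactly what weakly ordered hardware and optimizing compilers relax. A standard way to see this is the store-buffering litmus test: under SC, the outcome r1 == 0 && r2 == 0 is impossible, but a machine whose writes complete after later reads can produce it. A minimal sketch using C11 atomics; the thread structure, variable names, and iteration count are mine, not the handout's.

#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

atomic_int x, y;
int r1, r2;

/* Thread 0: write x, then read y. */
void *t0(void *arg) {
    (void)arg;
    atomic_store_explicit(&x, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&y, memory_order_relaxed);
    return NULL;
}

/* Thread 1: write y, then read x. */
void *t1(void *arg) {
    (void)arg;
    atomic_store_explicit(&y, 1, memory_order_relaxed);
    r2 = atomic_load_explicit(&x, memory_order_relaxed);
    return NULL;
}

int main(void) {
    /* With relaxed ordering, r1 == r2 == 0 can appear because a store may
       become visible after the later load, violating the SC sufficient
       conditions. Replacing memory_order_relaxed with memory_order_seq_cst
       enforces them and rules that outcome out. */
    for (int i = 0; i < 100000; i++) {
        atomic_store(&x, 0); atomic_store(&y, 0);
        pthread_t a, b;
        pthread_create(&a, NULL, t0, NULL);
        pthread_create(&b, NULL, t1, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        if (r1 == 0 && r2 == 0)
            printf("non-SC outcome observed at iteration %d\n", i);
    }
    return 0;
}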

MESI (4-state) Invalidation Protocol
- Add an Exclusive state: distinguish exclusive (writable) from owned (written)
  » main memory is up to date, so the cache is not necessarily the owner
  » can be written locally
- States:
  » invalid
  » exclusive or exclusive-clean (only this cache has a copy, but not modified)
  » shared (two or more caches may have copies)
  » modified (dirty)
- I -> E on PrRd if no other cache has a copy
  » => How can you tell?

Hardware Support for MESI
[Figure: processors, memory, and I/O devices on a shared bus with a shared signal line; garbled in transcription]
- All cache controllers snoop on BusRd
- Assert shared if the block is present (in S? E? M?)
- Issuer chooses between S and E
  » how does it know when all have voted? shared signal: wired-OR

MESI State Transition Diagram
- BusRd(S) means the shared line was asserted on the BusRd transaction
- Flush': if cache-to-cache transfers are used, only one cache flushes the data
- MOESI protocol adds an Owned state: exclusive, but memory not valid
[State transition diagram: M/E/S/I states with PrRd/BusRd(S), PrRd/BusRd(S'), BusRd/Flush, and BusRdX/Flush edges; garbled in transcription]

Lower-level Protocol Choices
- Who supplies data on a miss when the block is not in M state: memory or cache?
  » original (Illinois) MESI: the cache, since it was assumed faster than memory
  » not true in modern systems: intervening in another cache is more expensive than getting the data from memory
- Cache-to-cache sharing adds complexity
  » how does memory know it should supply the data (must wait for caches)?
  » a selection algorithm is needed if multiple caches have valid data
- Valuable for cache-coherent machines with distributed memory
  » may be cheaper to obtain the block from a nearby cache than from distant memory, especially when the machine is constructed out of SMP nodes (Stanford DASH)

Update Protocols
- If data is to be communicated between processors, invalidate protocols seem inefficient
- Consider a shared flag:
  » p0 waits for it to be zero, then does work and sets it to one
  » p1 waits for it to be one, then does work and sets it to zero
  » how many transactions?

Dragon Write-back Update Protocol (see the C sketch below)
- 4 states:
  » Exclusive-clean or exclusive (E): I and memory have it
  » Shared clean (Sc): I, others, and maybe memory, but I'm not the owner
  » Shared modified (Sm): I and others but not memory, and I'm the owner; Sm and Sc can coexist in different caches, with only one Sm
  » Modified or dirty (M): I and no one else
- No invalid state
  » if the block is in the cache, it cannot be invalid
  » if it is not present in the cache, view it as being in a not-present or invalid state
- New processor events: PrRdMiss, PrWrMiss
  » introduced to specify actions when the block is not present in the cache
- New bus transaction: BusUpd
  » broadcasts the single word written on the bus; updates other relevant caches
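A sketch of the Dragon write and snoop-update logic in the same style as the MSI sketch earlier. The shared wired-OR line is modeled as a boolean sampled on the bus transaction, and all type and function names (dragon_t, dragon_write, dragon_snoop_update) are invented for illustration; read misses, flushes, and memory behavior are elided.

#include <stdbool.h>
#include <stdio.h>

typedef enum { NOT_PRESENT, E, SC, SM, M } dragon_t;

static const char *dname[] = { "NP", "E", "Sc", "Sm", "M" };

/* Processor write; `shared` is the wired-OR shared line sampled on the
   accompanying bus transaction (true if another cache holds the block). */
void dragon_write(dragon_t *s, bool shared) {
    switch (*s) {
    case NOT_PRESENT:                 /* PrWrMiss */
        /* BusRd first; then BusUpd if other caches hold the block */
        *s = shared ? SM : M;
        break;
    case E:                           /* write hit, sole clean copy */
        *s = M;                       /* no bus transaction needed */
        break;
    case SC: case SM:                 /* write hit on a shared copy */
        /* BusUpd broadcasts the word; sharers update in place */
        *s = shared ? SM : M;         /* last remaining copy becomes M */
        break;
    case M:
        break;                        /* write hit, already sole owner */
    }
}

/* Snoop side: another cache's BusUpd updates our copy in place and
   demotes a former owner, since the new writer becomes the Sm owner. */
void dragon_snoop_update(dragon_t *s) {
    if (*s == SM) *s = SC;
    /* SC stays SC; the written word in the block is overwritten */
}

int main(void) {
    dragon_t p0 = NOT_PRESENT, p1 = NOT_PRESENT;
    dragon_write(&p0, false);    /* p0 writes the flag: sole copy -> M */
    p0 = SM; p1 = SC;            /* model p1's read miss: p0 supplies the
                                    block and demotes to Sm; p1 gets Sc */
    dragon_write(&p1, true);     /* p1 writes: one BusUpd, p1 -> Sm */
    dragon_snoop_update(&p0);    /* p0 updated in place, Sm -> Sc */
    printf("p0=%s p1=%s\n", dname[p0], dname[p1]);  /* prints p0=Sc p1=Sm */
    return 0;
}

Note how each hand-off of the flag costs a single word-sized BusUpd rather than an invalidation followed by a full-block miss, which is the efficiency argument the Update Protocols slide raises.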

Dragon State Transition Diagram
[State transition diagram: E/Sc/Sm/M states with PrRdMiss/BusRd(S), PrRdMiss/BusRd(S'), PrWrMiss/(BusRd(S); BusUpd), PrWrMiss/BusRd(S'), and BusUpd/Update edges; garbled in transcription]

Lower-level Protocol Choices
- Can the shared-modified state be eliminated?
  » yes, if memory is updated as well on BusUpd transactions (DEC Firefly)
  » the Dragon protocol doesn't do this (it assumes DRAM memory is slow to update)
- Should replacement of an Sc block be broadcast?
  » would allow the last copy to go to E state and not generate updates
  » the replacement bus transaction is not in the critical path; a later update may be
- Can the local copy be updated on a write hit before the controller gets the bus?
  » can mess up serialization
- Coherence and consistency considerations are much like the write-through case

Assessing Protocol Tradeoffs
- Tradeoffs are affected by technology characteristics and design complexity
- Part art and part science
  » art: experience, intuition, and aesthetics of designers
  » science: workload-driven evaluation for cost-performance
  » want a balanced system: no expensive resource heavily underutilized

Workload-Driven Evaluation
- Evaluating real machines
- Evaluating an architectural idea or trade-offs
  » => need good metrics of performance
  » => need to pick good workloads
  » => need to pay attention to scaling
  » many factors involved
- Today: a narrow architectural comparison, set in a wider context

Evaluation in Uniprocessors
- Decisions made only after quantitative evaluation
  » for existing systems: comparison and procurement evaluation
  » for future systems: careful extrapolation from known quantities
- A wide base of programs leads to standard benchmarks
  » measured on a wide range of machines and successive generations
- Measurements and technology assessment lead to proposed features
- Then simulation
  » a simulator is developed that can run with and without a feature
  » benchmarks are run through the simulator to obtain results
  » together with cost and complexity, decisions are made

More Difficult for Multiprocessors
- What is a representative workload?
  » the software model has not stabilized
  » many architectural and application degrees of freedom
  » huge design space: number of processors, other architectural and application parameters
  » the impact of these parameters and their interactions can be huge
  » high cost of communication
- What are the appropriate metrics?
- Simulation is expensive
  » realistic configurations and sensitivity analysis are difficult
  » larger design space, but more difficult to cover
- Understanding parallel programs as workloads is critical
  » particularly the interaction of application and architectural parameters

A Lot Depends on Sizes
- Application parameters and the number of processors affect inherent properties
  » load balance, communication, extra work, temporal and spatial locality
- Interactions with organization parameters of the extended memory hierarchy affect artifactual communication and performance
- Effects are often dramatic, sometimes small: application-dependent
[Figure: speedup vs. number of processors (1 to 31) for Ocean at several problem sizes N, and for Barnes-Hut on SGI Origin and Challenge at several second-level cache sizes; data labels garbled in transcription]
- Understanding size interactions and scaling relationships is key

Scaling: Why Worry?
- Fixed problem size is limited
- Too small a problem:
  » may be appropriate for a small machine
  » parallelism overheads begin to dominate benefits for larger machines: load imbalance, communication-to-computation ratio
  » may even achieve slowdowns
  » doesn't reflect real usage, and is inappropriate for large machines: can exaggerate the benefits of architectural improvements, especially when measured as percentage improvement in performance
- Too large a problem:
  » difficult to measure improvement (next slide)

Too Large a Problem
- Suppose the problem is realistically large for a big machine
- It may not fit in a small machine
  » can't run
  » thrashing to disk
  » working set doesn't fit in cache
- It fits at some p, leading to superlinear speedup
  » a real effect, but it doesn't help evaluate effectiveness
- Finally, users want to scale problems as machines grow
- Scaling can help avoid these problems

Demonstrating Scaling Problems
- Small Ocean and big equation solver problems on the SGI Origin
[Figure: speedup vs. number of processors on the SGI Origin for a small Ocean problem, falling well below ideal, and for a big (12 K x 12 K) grid solver problem, exceeding ideal; data labels garbled in transcription]

Communication and Replication
- View the parallel machine as an extended memory hierarchy
  » local cache, local memory, remote memory
- Classify misses in a cache at any level as for uniprocessors (see the classification sketch below):
  » compulsory or cold misses (no size effect)
  » capacity misses (size effect)
  » conflict or collision misses (size effect)
  » communication or coherence misses (no size effect)
- Communication induced by finite capacity is the most fundamental artifact
  » like cache size and miss rate or memory traffic in uniprocessors

Working Set Perspective
- At a given level of the hierarchy (to the next further one):
[Figure: data traffic vs. replication capacity (cache size); the curve drops at the first working set and again at the second working set, with capacity-generated traffic (including conflicts) sitting above a floor of other capacity-independent communication, inherent communication, and cold-start (compulsory) traffic]
- Hierarchy of working sets
- At the first-level cache (fully associative, one-word blocks), the curve is inherent to the algorithm
  » the working set curve for the program
- Traffic from any type of miss can be local or nonlocal (communication)
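The miss taxonomy above can be made concrete with a small classifier run over a reference trace. The sketch below distinguishes cold, coherence, and capacity-or-conflict misses for a direct-mapped cache; fully separating capacity from conflict misses would additionally require replaying the trace through a fully associative LRU cache of the same size, which is omitted here. All structure names, the trace, and the cache parameters are invented for illustration.

#include <stdio.h>
#include <stdbool.h>

#define LINES 256                 /* direct-mapped, one word per block */

typedef struct {
    bool valid;
    unsigned tag;
    bool invalidated;             /* killed by another processor's BusRdX */
} line_t;

typedef enum { HIT, COLD, COHERENCE, CAPACITY_OR_CONFLICT } kind_t;

static line_t cache[LINES];
static bool ever_seen[1 << 20];   /* demo assumes word addresses < 2^20 */

kind_t classify(unsigned addr) {
    unsigned idx = addr % LINES, tag = addr / LINES;
    line_t *l = &cache[idx];
    kind_t k;
    if (l->valid && l->tag == tag && !l->invalidated)
        return HIT;
    if (!ever_seen[addr])
        k = COLD;                     /* first reference ever */
    else if (l->valid && l->tag == tag && l->invalidated)
        k = COHERENCE;                /* we had it; a remote write took it */
    else
        k = CAPACITY_OR_CONFLICT;     /* lost to our own replacements */
    ever_seen[addr] = true;
    l->valid = true; l->tag = tag; l->invalidated = false;
    return k;
}

/* A remote processor's write (snooped BusRdX) invalidates our copy. */
void remote_write(unsigned addr) {
    unsigned idx = addr % LINES, tag = addr / LINES;
    if (cache[idx].valid && cache[idx].tag == tag)
        cache[idx].invalidated = true;
}

int main(void) {
    const char *names[] = { "hit", "cold", "coherence", "capacity/conflict" };
    printf("%s\n", names[classify(42)]);          /* cold */
    printf("%s\n", names[classify(42)]);          /* hit */
    remote_write(42);
    printf("%s\n", names[classify(42)]);          /* coherence */
    printf("%s\n", names[classify(42 + LINES)]);  /* cold (evicts 42) */
    printf("%s\n", names[classify(42)]);          /* capacity/conflict */
    return 0;
}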