COSC 6385 Computer Architecture - Thread Level Parallelism (III)


COSC 6385 Computer Architecture - Thread Level Parallelism (III), Spring 2013
Some slides are based on a lecture by David Culler, University of California, Berkeley: http://www.eecs.berkeley.edu/~culler/courses/cs252-s05

Larger Shared Memory Systems
- Typically distributed shared memory systems
- Local or remote memory access via the memory controller
- Directory per cache that tracks the state of every block in every cache: which caches have a copy of the block, dirty vs. clean, ...
- Info per memory block vs. per cache block?
  PLUS: in memory => simpler protocol (centralized / one location)
  MINUS: in memory => directory size is f(memory size) rather than f(cache size)
- How to prevent the directory from becoming a bottleneck? Distribute the directory entries with the memory, each directory keeping track of which processors have copies of its blocks
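
To make the f(memory size) concern concrete, here is a small back-of-the-envelope sketch (my own illustration, not from the slides): assuming a full-bit-vector directory with one presence bit per processor plus 2 state bits per 64-byte memory block, the overhead grows linearly with the processor count.

#include <stdio.h>

int main(void) {
    const int block_bits = 64 * 8;              /* 64-byte memory block */
    for (int procs = 16; procs <= 256; procs *= 4) {
        int dir_bits = procs + 2;               /* presence bits + state bits */
        printf("%3d processors: %5.1f%% directory overhead per block\n",
               procs, 100.0 * dir_bits / block_bits);
    }
    return 0;
}

With 256 processors the directory already adds roughly 50% on top of every memory block, which is one reason the entries are distributed with the memory.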

Distributed Directory MPs
[Diagram: multiple nodes, each with processor + cache, local memory, and a directory, connected by an interconnection network]

Distributed Shared Memory Systems
[Figure: no text recoverable from the transcription]

AMD 8350 quad-core Opteron processor
Single-processor configuration:
- Private L1 cache: 32 KB data, 32 KB instruction
- Private L2 cache: 512 KB unified
- Shared L3 cache: 2 MB unified
- Centralized shared memory system
[Diagram: four cores, each with private L1 and L2 caches, connected through a crossbar to the shared L3 cache, the memory controller with attached memory, and 3 HyperTransport links]

AMD 8350 quad-core Opteron, multi-processor configuration:
- Distributed shared memory system
[Diagram: four sockets holding cores 0-15, four cores per socket; each socket has its own L3 cache and local memory, and the sockets are connected by 8 GB/s HyperTransport (HT) links]

Programming distributed shared memory systems
- Programmers must use threads or processes
- Spread the workload across multiple cores
- Write parallel algorithms
- The OS will map threads/processes to cores
- True concurrency, not just uni-processor time-slicing
- Pre-emptive context switching: a context switch can happen at any time
- Concurrency bugs are exposed much faster with multi-core
Slide based on a lecture of Jernej Barbic, MIT, http://people.csail.mit.edu/barbic/multi-core-15213-sp07.ppt

Programming distributed shared memory systems
- Each thread/process has an affinity mask, which specifies what cores the thread is allowed to run on
- Different threads can have different masks
- Affinities are inherited across fork()
- Example: 4-way multi-core, without SMT. The mask 1101 (bits for core 3, core 2, core 1, core 0) means the process/thread is allowed to run on cores 0, 2, and 3, but not on core 1 (see the sketch below)
Slide based on a lecture of Jernej Barbic, MIT, http://people.csail.mit.edu/barbic/multi-core-15213-sp07.ppt
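
A minimal sketch of the 1101 example, assuming a Linux/glibc system (the cpu_set_t API is introduced on the following slides): build a mask allowing cores 0, 2, and 3 and apply it to the calling process.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void) {
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);   /* core 0 */
    CPU_SET(2, &mask);   /* core 2 */
    CPU_SET(3, &mask);   /* core 3 -> mask is now binary 1101 */

    /* pid 0 means "the calling process" */
    if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) != 0) {
        perror("sched_setaffinity");
        return 1;
    }
    printf("now restricted to cores 0, 2 and 3\n");
    return 0;
}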

Default Affinities
- The default affinity mask is all 1s: all threads can run on all processors and cores
- The OS scheduler decides which thread runs on which core
- The OS scheduler detects skewed workloads and migrates threads to less busy processors; process migration, however, is costly
- Soft affinity: the tendency of a scheduler to try to keep processes on the same CPU as long as possible
- Hard affinity: affinity information has been explicitly set by the application; the OS has to adhere to this setting

Linux Kernel scheduler API
Retrieve the current affinity mask of a process:

#define _GNU_SOURCE
#include <sys/types.h>
#include <sched.h>
#include <unistd.h>
#include <stdio.h>
#include <string.h>
#include <errno.h>

unsigned int len = sizeof(cpu_set_t);
cpu_set_t mask;
int i, ret;
pid_t pid = getpid();   /* get the process id of this app */

ret = sched_getaffinity(pid, len, &mask);
if (ret != 0)
    printf("Error in getaffinity %d (%s)\n", errno, strerror(errno));

for (i = 0; i < numCPUs; i++) {     /* numCPUs: core count, determined elsewhere */
    if (CPU_ISSET(i, &mask))
        printf("Process could run on CPU %d\n", i);
}

Linux Kernel scheduler API (II)
Set the affinity mask of a process:

unsigned int len = sizeof(cpu_set_t);
cpu_set_t mask;
int ret;
pid_t pid = getpid();   /* get the process id of this app */

/* clear the mask */
CPU_ZERO(&mask);
/* set the mask such that the process is only allowed
   to execute on the desired CPU */
CPU_SET(cpu_id, &mask);

ret = sched_setaffinity(pid, len, &mask);
if (ret != 0) {
    printf("Error in setaffinity %d (%s)\n", errno, strerror(errno));
}

Linux Kernel scheduler API (III)
Setting thread-related affinity information:
- Use sched_setaffinity with pid = 0: changes the affinity settings for the calling thread only
- Use libnuma functionality: numa_run_on_node(); numa_run_on_node_mask(); modifies affinity information based on CPU sockets (NUMA nodes), not on individual cores
- Use pthread functions, available on most Linux systems:

#define _GNU_SOURCE
pthread_setaffinity_np(pthread_t thread, size_t cpusetsize, const cpu_set_t *cpuset);
pthread_attr_setaffinity_np(pthread_attr_t *attr, size_t cpusetsize, const cpu_set_t *cpuset);
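
Putting the pthread variant together, a minimal sketch (assuming Linux/glibc, compiled with -pthread; the choice of core 2 is arbitrary) that pins a new thread to a single core via its attributes before it starts:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

static void *worker(void *arg) {
    (void)arg;
    printf("worker is running on core %d\n", sched_getcpu());
    return NULL;
}

int main(void) {
    cpu_set_t mask;
    pthread_attr_t attr;
    pthread_t t;

    CPU_ZERO(&mask);
    CPU_SET(2, &mask);                 /* allow core 2 only */

    /* setting the affinity in the attributes avoids a race with
       the thread starting before pthread_setaffinity_np() runs */
    pthread_attr_init(&attr);
    pthread_attr_setaffinity_np(&attr, sizeof(cpu_set_t), &mask);

    pthread_create(&t, &attr, worker, NULL);
    pthread_join(t, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}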

Directory based Cache Coherence Protocol
Similar to the snoopy protocol: three states (a data-structure sketch follows below)
- Shared: >= 1 processors have the data, memory is up-to-date
- Uncached: no processor has it; not valid in any cache
- Exclusive: 1 processor (the owner) has the data; memory is out-of-date
In addition to the cache state, the directory must track which processors have the data when in the Shared state (usually a bit vector: bit i is 1 if processor i has a copy)
Assumptions:
- Writes to non-exclusive data => write miss
- The processor blocks until the access completes
- Messages are received and acted upon in the order they were sent

Directory Protocol
- No bus, and we don't want to broadcast: the interconnect is no longer a single arbitration point, and all messages have explicit responses
- Terms: typically 3 processors are involved
  - Local node: where a request originates
  - Home node: where the memory location of an address resides
  - Remote node: has a copy of a cache block, whether exclusive or shared
- Example messages on the next slide: P = processor number, A = address
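
As a data structure, a directory entry is small; the following is a minimal sketch (names and the 64-processor limit are my assumptions) of the per-block state plus the sharer bit vector described above:

#include <stdint.h>
#include <stdio.h>

typedef enum { UNCACHED, SHARED, EXCLUSIVE } dir_state_t;

typedef struct {
    dir_state_t state;
    uint64_t    sharers;   /* bit i set => processor i has a copy */
} dir_entry_t;

int main(void) {
    dir_entry_t e = { UNCACHED, 0 };

    /* P1 and P2 read the block: state Shared, both bits set */
    e.state   = SHARED;
    e.sharers = (1ULL << 1) | (1ULL << 2);

    printf("state=%d, P2 has a copy: %s\n",
           e.state, ((e.sharers >> 2) & 1) ? "yes" : "no");
    return 0;
}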

Directory Protocol Messages (a C encoding sketch follows after this slide)

Message type      Source           Destination      Msg content
Read miss         Local cache      Home directory   P, A
  Processor P reads data at address A; make P a read sharer and arrange to send the data back
Write miss        Local cache      Home directory   P, A
  Processor P writes data at address A; make P the exclusive owner and arrange to send the data back
Invalidate        Home directory   Remote caches    A
  Invalidate a shared copy at address A
Fetch             Home directory   Remote cache     A
  Fetch the block at address A and send it to its home directory
Fetch/Invalidate  Home directory   Remote cache     A
  Fetch the block at address A and send it to its home directory; invalidate the block in the cache
Data value reply  Home directory   Local cache      Data
  Return a data value from the home memory (read miss response)
Data write-back   Remote cache     Home directory   A, Data
  Write back a data value for address A (invalidate response)

State Transition Diagram for an Individual Cache Block in a Directory Based System
- States identical to the snoopy case; transactions very similar
- Transitions caused by read misses, write misses, invalidates, and data fetch requests
- Generates read miss and write miss messages to the home directory
- Write misses that were broadcast on the bus for snooping => explicit invalidate and data fetch requests
- Note: on a write, the cache block is bigger than the datum written, so the full cache block must be read first
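
The message table above maps onto a simple C encoding; a sketch with assumed field names (P = processor number, A = address, as in the table):

#include <stdint.h>
#include <stdio.h>

typedef enum {
    READ_MISS,          /* local cache  -> home directory: P, A    */
    WRITE_MISS,         /* local cache  -> home directory: P, A    */
    INVALIDATE,         /* home dir     -> remote caches:  A       */
    FETCH,              /* home dir     -> remote cache:   A       */
    FETCH_INVALIDATE,   /* home dir     -> remote cache:   A       */
    DATA_VALUE_REPLY,   /* home dir     -> local cache:    data    */
    DATA_WRITE_BACK     /* remote cache -> home directory: A, data */
} msg_type_t;

typedef struct {
    msg_type_t type;
    int        proc;    /* P: requesting processor, where applicable */
    uint64_t   addr;    /* A: block address, where applicable        */
    uint64_t   data;    /* payload for replies and write-backs       */
} msg_t;

int main(void) {
    /* P2 read-misses on some address A1 (0x40 is arbitrary) */
    msg_t m = { READ_MISS, 2, 0x40, 0 };
    printf("msg type=%d from P%d for address 0x%llx\n",
           m.type, m.proc, (unsigned long long)m.addr);
    return 0;
}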

CPU Cache State Machine
State machine for CPU requests, for each memory block; a block starts in the Invalid state if it is only in memory (a switch-statement sketch of these transitions follows below):
- Invalid --(CPU read: send Read Miss message)--> Shared (read only)
- Invalid --(CPU write: send Write Miss message to home directory)--> Exclusive (read/write)
- Shared: CPU read hit stays in Shared
- Shared --(CPU write: send Write Miss message to home directory)--> Exclusive
- Shared --(Invalidate)--> Invalid
- Exclusive: CPU read hit and CPU write hit stay in Exclusive
- Exclusive --(Fetch: send Data Write Back message to home directory)--> Shared
- Exclusive --(Fetch/Invalidate: send Data Write Back message to home directory)--> Invalid
- Exclusive --(CPU read miss: send Data Write Back message and Read Miss to home directory)--> Shared
- Exclusive --(CPU write miss: send Data Write Back message and Write Miss to home directory)--> Exclusive (for the new block)

State Transition Diagram for the Directory
- Same states and structure as the transition diagram for an individual cache block
- 2 actions: update the directory state and send messages to satisfy requests
- Tracks all copies of each memory block
- Also indicates an action that updates the sharing set, Sharers, as well as sending a message
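
Returning to the cache-side transitions above: they map naturally onto a switch statement. This is an illustrative sketch (message sends reduced to printf, the hit/miss distinction simplified), not a complete protocol engine:

#include <stdio.h>

typedef enum { INVALID, SHARED, EXCLUSIVE } cstate_t;
typedef enum { CPU_READ, CPU_WRITE, DIR_FETCH, DIR_FETCH_INVAL, DIR_INVAL } event_t;

static cstate_t step(cstate_t s, event_t e) {
    switch (s) {
    case INVALID:
        if (e == CPU_READ)  { puts("send Read Miss to home directory");  return SHARED; }
        if (e == CPU_WRITE) { puts("send Write Miss to home directory"); return EXCLUSIVE; }
        break;
    case SHARED:                              /* CPU read hit: stay in Shared */
        if (e == CPU_WRITE) { puts("send Write Miss to home directory"); return EXCLUSIVE; }
        if (e == DIR_INVAL) return INVALID;
        break;
    case EXCLUSIVE:                           /* CPU read/write hit: stay in Exclusive */
        if (e == DIR_FETCH)       { puts("send Data Write Back"); return SHARED; }
        if (e == DIR_FETCH_INVAL) { puts("send Data Write Back"); return INVALID; }
        break;
    }
    return s;
}

int main(void) {
    cstate_t s = INVALID;
    s = step(s, CPU_READ);          /* Invalid   -> Shared    */
    s = step(s, CPU_WRITE);         /* Shared    -> Exclusive */
    s = step(s, DIR_FETCH);         /* Exclusive -> Shared    */
    printf("final state: %d (1 = Shared)\n", s);
    return 0;
}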

Directory State Machine
State machine for directory requests, for each memory block; a block starts in the Uncached state if it is only in memory:
- Uncached --(Read miss: Sharers = {P}; send Data Value Reply)--> Shared (read only)
- Uncached --(Write miss: Sharers = {P}; send Data Value Reply message)--> Exclusive (read/write)
- Shared --(Read miss: Sharers += {P}; send Data Value Reply message)--> Shared
- Shared --(Write miss: send Invalidate to Sharers; then Sharers = {P}; send Data Value Reply message)--> Exclusive
- Exclusive --(Read miss: send Fetch message to the remote cache, which writes the block back; Sharers += {P}; send Data Value Reply message)--> Shared
- Exclusive --(Data Write Back: Sharers = {}; write back the block)--> Uncached
- Exclusive --(Write miss: send Fetch/Invalidate message to the remote cache; Sharers = {P}; send Data Value Reply)--> Exclusive

Example Directory Protocol
A message sent to the directory causes two actions: update the directory, and send further messages to satisfy the request.

Block is in the Uncached state: the copy in memory is the current value; the only possible requests for that block are:
- Read miss: the requesting processor is sent the data from memory and becomes the only sharing node; the state of the block is made Shared.
- Write miss: the requesting processor is sent the value and becomes the sharing node. The block is made Exclusive to indicate that the only valid copy is cached; Sharers indicates the identity of the owner.

Block is Shared => the memory value is up-to-date:
- Read miss: the requesting processor is sent the data from memory, and the requesting processor is added to the sharing set.
- Write miss: the requesting processor is sent the value. All processors in the set Sharers are sent invalidate messages, and Sharers is set to the identity of the requesting processor. The state of the block is made Exclusive.

Example Directory Protocol
Block is Exclusive: the current value of the block is held in the cache of the processor identified by the set Sharers (the owner) => three possible directory requests:
- Read miss: the owner processor is sent a data fetch message, causing the state of the block in the owner's cache to transition to Shared and causing the owner to send the data to the directory, where it is written to memory and sent back to the requesting processor. The identity of the requesting processor is added to the set Sharers, which still contains the identity of the processor that was the owner (since it still has a readable copy); the state is Shared.
- Data write-back: the owner processor is replacing the block and hence must write it back, making the memory copy up-to-date (the home directory essentially becomes the owner); the block is now Uncached, and the Sharers set is empty.
- Write miss: the block has a new owner. A message is sent to the old owner, causing the cache to send the value of the block to the directory, from where it is sent to the requesting processor, which becomes the new owner. Sharers is set to the identity of the new owner, and the state of the block is made Exclusive.
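
The directory-side logic for all three states can be sketched the same way; a hedged illustration (bit-vector sharers, message sends reduced to printf) that mirrors the worked example below:

#include <stdint.h>
#include <stdio.h>

typedef enum { UNCACHED, SHARED, EXCLUSIVE } dstate_t;
typedef struct { dstate_t state; uint64_t sharers; } dentry_t;

static void read_miss(dentry_t *d, int p) {
    switch (d->state) {
    case UNCACHED:                       /* memory has the current value */
        d->state = SHARED; d->sharers = 1ULL << p;
        break;
    case SHARED:                         /* just add the new sharer */
        d->sharers |= 1ULL << p;
        break;
    case EXCLUSIVE:                      /* owner writes back, then shares */
        printf("Fetch -> owner\n");
        d->state = SHARED; d->sharers |= 1ULL << p;
        break;
    }
    printf("Data Value Reply -> P%d\n", p);
}

static void write_miss(dentry_t *d, int p) {
    if (d->state == SHARED)    printf("Invalidate -> all sharers\n");
    if (d->state == EXCLUSIVE) printf("Fetch/Invalidate -> owner\n");
    d->state = EXCLUSIVE; d->sharers = 1ULL << p;   /* P becomes the owner */
    printf("Data Value Reply -> P%d\n", p);
}

int main(void) {
    dentry_t d = { UNCACHED, 0 };
    write_miss(&d, 1);   /* P1 writes: Uncached -> Exclusive {P1}  */
    read_miss(&d, 2);    /* P2 reads:  Exclusive -> Shared {P1,P2} */
    write_miss(&d, 2);   /* P2 writes: Shared -> Exclusive {P2}    */
    return 0;
}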

Example
Scenario (A1 and A2 map to the same cache block):
1. P1: Write 10 to A1
2. P1: Read A1
3. P2: Read A1
4. P2: Write 20 to A1
5. P2: Write 40 to A2

Step-by-step trace (cache entries: state, address, value; bus messages: type, processor, address, value; directory entries: address, state, sharer set, memory value):

Step 1, P1: Write 10 to A1
- Bus: WrMs P1 A1; Directory: A1 Excl. {P1}
- Bus: DaRp P1 A1 0; P1 cache: Excl. A1 10

Step 2, P1: Read A1
- Read hit in P1's cache (Excl. A1 10); no messages

Step 3, P2: Read A1
- Bus: RdMs P2 A1
- Directory sends Ftch P1 A1; P1 cache: Shar. A1 10; the value 10 is written back to memory
- Bus: DaRp P2 A1 10; P2 cache: Shar. A1 10; Directory: A1 Shar. {P1,P2}, memory value 10

Step 4, P2: Write 20 to A1
- Bus: WrMs P2 A1
- Directory sends Inval. P1 A1; P1 cache: Inv.
- P2 cache: Excl. A1 20; Directory: A1 Excl. {P2}, memory value 10

Step 5, P2: Write 40 to A2
- Bus: WrMs P2 A2; Directory: A2 Excl. {P2}
- A2 replaces A1 in P2's cache, forcing a write-back; Bus: WrBk P2 A1 20; Directory: A1 Unca. {}, memory value 20
- Bus: DaRp P2 A2 0; P2 cache: Excl. A2 40; Directory: A2 Excl. {P2}, memory value 0

Implementing a Directory
- We assume operations are atomic, but they are not; reality is much harder; we must avoid deadlock when the network runs out of buffers
- Optimization: on a read miss or write miss to a block in the Exclusive state, send the data directly from the owner to the requestor, instead of first to memory and then from memory to the requestor

Intel Sandy Bridge Architecture
- Newest generation of the Intel architecture (at the time of this lecture)
- The desktop version integrates the regular processor and the graphics unit on one chip

Intel Sandy Bridge
- Sandy Bridge now contains the memory controller, QPI, and graphics processor on chip; AMD was first to integrate the memory controller and the HyperTransport interface on the chip
- Instruction fetch: decoding variable-length instructions into uops is complex and expensive
- Sandy Bridge introduces a uop cache: a hit in the uop cache bypasses the decoding logic
- The uop cache is organized into 32 sets, each 8-way, with 6 uops per entry; it is included physically in the L1 cache
- A predicted address probes the uop cache: if found, the instruction bypasses the decoding step

Intel Sandy Bridge
- All 256-bit AVX instructions can execute as a single uop, in contrast to AMD, where they are broken down into two 128-bit operations
- The FP data path is, however, only 128 bits wide on Sandy Bridge
- Functional units are grouped into three domains: integer, SIMD integer, and FP
- Free bypassing within each domain, but a 1-2 clock cycle penalty for instructions bypassing between different domains; this simplifies the forwarding logic between the domains for rarely used situations

Intel Sandy Bridge
- A ring interconnects the cores, graphics, and L3 cache
- Composed of four different rings: request, snoop, acknowledge, and a 32-byte-wide data ring
- Responsible for a distributed communication protocol that enforces coherency and ordering
Source: http://www.realworldtech.com/page.cfm?articleid=rwt091810191937

AMD Istanbul/Magny-Cours processor
Source: http://www.phys.uu.nl/~euroben/reports/web10/amd.php

AMD Interlagos Processor
- First generation of the new Bulldozer architecture
- Two cores form a module
- Each module shares an L1 instruction cache, the floating point unit (FPU), and the L2 cache; this saves area and power, making it possible to pack in more cores and attain higher throughput, but leads to a degradation in per-core performance
- All modules in a chip share the L3 cache

AMD Interlagos Processor
Source: http://www.realworldtech.com/page.cfm?articleid=rwt082610181333