COSC 6385 Computer Architecture - Thread Level Parallelism (III)
Spring 2013
Some slides are based on a lecture by David Culler, University of California, Berkeley
http://www.eecs.berkeley.edu/~culler/courses/cs252-s05

Larger Shared Memory Systems
Typically Distributed Shared Memory Systems
Local or remote memory access via memory controller
Directory per cache that tracks the state of every block in every cache
  Which caches have a copy of the block, dirty vs. clean, ...
Info per memory block vs. per cache block?
  PLUS: in memory => simpler protocol (centralized/one location)
  MINUS: in memory => directory is f(memory size) vs. f(cache size)
Prevent the directory from becoming a bottleneck?
  Distribute directory entries with memory, each keeping track of which processors have copies of their blocks (a sketch of such a directory entry follows below)
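As a rough illustration, not part of the original slides, a per-block directory entry could be represented as follows; the type and field names and the fixed width of the sharer vector are assumptions for this sketch only.

  #include <stdint.h>

  /* Possible states of a memory block in the directory (see the protocol below). */
  typedef enum { UNCACHED, SHARED, EXCLUSIVE } block_state_t;

  /* One directory entry per memory block, co-located with that block's home memory. */
  typedef struct {
      block_state_t state;    /* Uncached, Shared, or Exclusive                     */
      uint64_t      sharers;  /* bit vector: bit i set => processor i holds a copy  */
  } dir_entry_t;

Keeping the entry with the block's home memory is what lets each node answer coherence requests for its own blocks without a central directory.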
Distributed Directory MPs
Distributed Shared Memory Systems
AMD 8350 quad-core Opteron, single-processor configuration
Private L1 cache: 32 KB data, 32 KB instruction
Private L2 cache: 512 KB unified
Shared L3 cache: 2 MB unified
Three HyperTransport links and an on-chip memory controller; the four cores connect to the shared L3 through a crossbar
Centralized shared memory system
[Block diagram: four cores, each with private L1 and L2, shared L3, crossbar, memory controller, 3 HyperTransport links, memory]

AMD 8350 quad-core Opteron, multi-processor configuration
Distributed shared memory system
[Block diagram: four sockets (cores 0-15), each with its own shared L3 and local memory, connected by 8 GB/s HyperTransport links]
Programming distributed shared memory systems
Programmers must use threads or processes
  Spread the workload across multiple cores
  Write parallel algorithms
  OS will map threads/processes to cores
True concurrency, not just uni-processor time-slicing
  Pre-emptive context switching: a context switch can happen at any time
  Concurrency bugs are exposed much faster with multi-core
Slide based on a lecture by Jernej Barbic, MIT, http://people.csail.mit.edu/barbic/multi-core-15213-sp07.ppt

Programming distributed shared memory systems
Each thread/process has an affinity mask
  Specifies which cores the thread is allowed to run on
  Different threads can have different masks
  Affinities are inherited across fork()
Example: 4-way multi-core, without SMT (see the code sketch after this slide)

  core 3  core 2  core 1  core 0
    1       1       0       1

The process/thread is allowed to run on cores 0, 2, and 3, but not on core 1
Slide based on a lecture by Jernej Barbic, MIT, http://people.csail.mit.edu/barbic/multi-core-15213-sp07.ppt
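The following minimal sketch, which is not part of the original slides, shows how the example mask above (cores 0, 2, and 3) could be built and applied with the Linux CPU-set macros introduced on the next slides; error handling is reduced to a single perror call.

  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>

  int main(void)
  {
      cpu_set_t mask;

      CPU_ZERO(&mask);       /* start with an empty mask:   0 0 0 0 */
      CPU_SET(0, &mask);     /* allow core 0                        */
      CPU_SET(2, &mask);     /* allow core 2                        */
      CPU_SET(3, &mask);     /* allow core 3 -> mask reads  1 1 0 1 */

      /* pid 0 = the calling process: allowed on cores 0, 2, 3, not on core 1 */
      if (sched_setaffinity(0, sizeof(cpu_set_t), &mask) != 0)
          perror("sched_setaffinity");

      return 0;
  }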
Process migration is costly

Default Affinities
Default affinity mask is all 1s: all threads can run on all processors and cores
OS scheduler decides which thread runs on which core
OS scheduler detects skewed workloads, migrating threads to less busy processors
Soft affinity: tendency of a scheduler to try to keep processes on the same CPU as long as possible
Hard affinity: affinity information has been explicitly set by the application; the OS has to adhere to this setting

Linux Kernel scheduler API
Retrieve the current affinity mask of a process:

  #define _GNU_SOURCE
  #include <sys/types.h>
  #include <sched.h>
  #include <unistd.h>
  #include <errno.h>
  #include <stdio.h>
  #include <string.h>

  unsigned int len = sizeof(cpu_set_t);
  cpu_set_t mask;
  int ret, i;
  int numCPUs = sysconf(_SC_NPROCESSORS_ONLN);  /* number of online cores          */
  pid_t pid = getpid();                         /* get the process id of this app  */

  ret = sched_getaffinity(pid, len, &mask);
  if (ret != 0)
      printf("Error in getaffinity %d (%s)\n", errno, strerror(errno));

  for (i = 0; i < numCPUs; i++) {
      if (CPU_ISSET(i, &mask))
          printf("Process could run on CPU %d\n", i);
  }
Linux Kernel scheduler API (II)
Set the affinity mask of a process:

  unsigned int len = sizeof(cpu_set_t);
  cpu_set_t mask;
  int ret;
  pid_t pid = getpid();    /* get the process id of this app */

  /* clear the mask */
  CPU_ZERO(&mask);
  /* set the mask such that the process is only allowed to execute on the desired CPU (cpu_id) */
  CPU_SET(cpu_id, &mask);

  ret = sched_setaffinity(pid, len, &mask);
  if (ret != 0) {
      printf("Error in setaffinity %d (%s)\n", errno, strerror(errno));
  }

Linux Kernel scheduler API (III)
Setting thread-related affinity information:
Use sched_setaffinity with pid = 0
  Changes the affinity settings for this thread only
Use libnuma functionality
  numa_run_on_node();
  numa_run_on_node_mask();
  Modifies affinity information based on NUMA nodes (CPU sockets), not on individual cores
Use pthread functions on most Linux systems (see the sketch after this slide)
  #define _GNU_SOURCE
  pthread_setaffinity_np(pthread_t t, size_t len, const cpu_set_t *mask);
  pthread_attr_setaffinity_np(pthread_attr_t *a, size_t len, const cpu_set_t *mask);
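As a minimal sketch of the pthread route, not taken from the original slides: it creates one worker thread and pins it to core 2. The worker function and the choice of core are placeholders.

  #define _GNU_SOURCE
  #include <pthread.h>
  #include <sched.h>
  #include <stdio.h>

  static void *worker(void *arg)
  {
      /* ... thread work ... */
      return NULL;
  }

  int main(void)
  {
      pthread_t t;
      cpu_set_t mask;

      pthread_create(&t, NULL, worker, NULL);

      /* pin the newly created thread to core 2 only */
      CPU_ZERO(&mask);
      CPU_SET(2, &mask);
      if (pthread_setaffinity_np(t, sizeof(cpu_set_t), &mask) != 0)
          fprintf(stderr, "pthread_setaffinity_np failed\n");

      pthread_join(t, NULL);
      return 0;
  }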
Directory-based Cache Coherence Protocol
Similar to Snoopy Protocol: three states
  Shared: >= 1 processors have the data, memory is up-to-date
  Uncached: no processor has it; not valid in any cache
  Exclusive: 1 processor (the owner) has the data; memory is out-of-date
In addition to the cache state, must track which processors have the data when in the shared state
  (usually a bit vector, bit i = 1 if processor i has a copy)
Assumptions:
  Writes to non-exclusive data => write miss
  Processor blocks until the access completes
  Messages are received and acted upon in the order sent

Directory Protocol
No bus, and we don't want to broadcast:
  the interconnect is no longer a single arbitration point
  all messages have explicit responses
Terms: typically 3 processors involved
  Local node: where a request originates
  Home node: where the memory location of an address resides
  Remote node: has a copy of the cache block, whether exclusive or shared
Example messages on the next slide: P = processor number, A = address
Directory Protocol Messages (P = processor number, A = address)

Read miss        | Local cache -> Home directory   | P, A
  Processor P reads data at address A; make P a read sharer and arrange to send the data back
Write miss       | Local cache -> Home directory   | P, A
  Processor P writes data at address A; make P the exclusive owner and arrange to send the data back
Invalidate       | Home directory -> Remote caches | A
  Invalidate a shared copy at address A
Fetch            | Home directory -> Remote cache  | A
  Fetch the block at address A and send it to its home directory
Fetch/Invalidate | Home directory -> Remote cache  | A
  Fetch the block at address A and send it to its home directory; invalidate the block in the cache
Data value reply | Home directory -> Local cache   | Data
  Return a data value from the home memory (read miss response)
Data write-back  | Remote cache -> Home directory  | A, Data
  Write back a data value for address A (invalidate response)

State Transition Diagram for an Individual Cache Block in a Directory-Based System
States identical to the snoopy case; transactions very similar
Transitions caused by read misses, write misses, invalidates, and data fetch requests
Generates read miss and write miss messages to the home directory
Write misses that were broadcast on the bus for snooping => explicit invalidate and data fetch requests
Note: on a write, a cache block is bigger than the word being written, so the full cache block must be read
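To make the message set above concrete, here is an illustrative C encoding; it is not from the slides, and the type and field names are purely illustrative. The sketches after the next two slides use the same idea informally.

  /* Message types exchanged between caches and the home directory (illustrative). */
  typedef enum {
      MSG_READ_MISS,        /* local cache -> home directory: P, A      */
      MSG_WRITE_MISS,       /* local cache -> home directory: P, A      */
      MSG_INVALIDATE,       /* home directory -> remote caches: A       */
      MSG_FETCH,            /* home directory -> remote cache: A        */
      MSG_FETCH_INVALIDATE, /* home directory -> remote cache: A        */
      MSG_DATA_VALUE_REPLY, /* home directory -> local cache: data      */
      MSG_DATA_WRITE_BACK   /* remote cache -> home directory: A, data  */
  } msg_type_t;

  typedef struct {
      msg_type_t type;
      int        proc;   /* P: requesting processor, where applicable */
      long       addr;   /* A: block address                          */
      long       data;   /* block data, where applicable              */
  } msg_t;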
CPU Cache State Machine
State machine for CPU requests, for each memory block; Invalid state if the block is only in memory

[State diagram: Invalid, Shared (read only), Exclusive (read/write)]
Invalid, CPU read: send Read Miss message to home directory -> Shared
Invalid, CPU write: send Write Miss message to home directory -> Exclusive
Shared: CPU read hit
Shared, CPU write: send Write Miss message to home directory -> Exclusive
Shared, Invalidate (from home directory) -> Invalid
Exclusive: CPU read hit, CPU write hit
Exclusive, Fetch: send Data Write Back message to home directory -> Shared
Exclusive, Fetch/Invalidate: send Data Write Back message to home directory -> Invalid
Exclusive, CPU read miss (block replacement): send Data Write Back message and Read Miss to home directory
Exclusive, CPU write miss (block replacement): send Data Write Back message and Write Miss to home directory

State Transition Diagram for the Directory
Same states and structure as the transition diagram for an individual cache block
Two actions: update the directory state and send messages to satisfy requests
Tracks all copies of a memory block
Also indicates an action that updates the sharing set, Sharers, as well as sending a message
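Before turning to the directory's own state machine, here is a sketch of how a cache controller might react to a CPU write in the machine above. This is not from the slides; cache_state_t and send_to_home() are hypothetical names used only for illustration.

  /* Illustrative cache-side handling of a CPU write (not the actual hardware logic). */
  typedef enum { INVALID, SHARED, EXCLUSIVE } cache_state_t;

  enum { MSG_WRITE_MISS = 1 };

  void send_to_home(int msg_type, int proc, long addr);  /* hypothetical message primitive */

  void cpu_write(cache_state_t *state, int my_proc, long addr)
  {
      switch (*state) {
      case EXCLUSIVE:                   /* write hit: no messages needed            */
          break;
      case SHARED:                      /* must gain exclusive ownership            */
      case INVALID:                     /* write miss                               */
          send_to_home(MSG_WRITE_MISS, my_proc, addr);
          /* ... block until the Data Value Reply arrives from the home directory ... */
          *state = EXCLUSIVE;
          break;
      }
  }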
Directory State Machine
State machine for directory requests, for each memory block; Uncached state if the block is only in memory

[State diagram: Uncached, Shared (read only), Exclusive (read/write)]
Uncached,  Read miss:  Sharers = {P}; send Data Value Reply -> Shared
Uncached,  Write miss: Sharers = {P}; send Data Value Reply -> Exclusive
Shared,    Read miss:  Sharers += {P}; send Data Value Reply -> Shared
Shared,    Write miss: send Invalidate to Sharers; then Sharers = {P}; send Data Value Reply -> Exclusive
Exclusive, Read miss:  send Fetch to owner (write back block); Sharers += {P}; send Data Value Reply -> Shared
Exclusive, Write miss: send Fetch/Invalidate to owner (write back block); Sharers = {P}; send Data Value Reply -> Exclusive
Exclusive, Data Write Back: Sharers = {}; write back block -> Uncached

Example Directory Protocol
A message sent to the directory causes two actions:
  Update the directory
  Send more messages to satisfy the request
Block is in the Uncached state: the copy in memory is the current value; the only possible requests for that block are:
  Read miss: the requesting processor is sent the data from memory and the requestor is made the only sharing node; the state of the block is made Shared
  Write miss: the requesting processor is sent the value and becomes the sharing node; the block is made Exclusive to indicate that the only valid copy is cached; Sharers indicates the identity of the owner
Block is Shared => the memory value is up-to-date:
  Read miss: the requesting processor is sent the data from memory, and the requesting processor is added to the sharing set
  Write miss: the requesting processor is sent the value; all processors in the set Sharers are sent invalidate messages, and Sharers is set to the identity of the requesting processor; the state of the block is made Exclusive
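Correspondingly, a sketch of how the home directory might handle an incoming Read Miss in each of its three states. This is illustrative only, not the actual implementation; the helper names and the bit-vector representation of Sharers are assumptions.

  #include <stdint.h>

  typedef enum { UNCACHED, SHARED, EXCLUSIVE } dir_state_t;

  typedef struct {
      dir_state_t state;
      uint64_t    sharers;    /* bit vector of processors holding a copy */
  } dir_entry_t;

  void send_data_value_reply(int proc, long addr);   /* hypothetical message primitives */
  void send_fetch(int owner, long addr);

  void directory_read_miss(dir_entry_t *e, int p, long addr)
  {
      switch (e->state) {
      case UNCACHED:                                 /* memory has the only copy        */
          e->sharers = 1ULL << p;
          e->state   = SHARED;
          send_data_value_reply(p, addr);
          break;
      case SHARED:                                   /* memory is up to date            */
          e->sharers |= 1ULL << p;
          send_data_value_reply(p, addr);
          break;
      case EXCLUSIVE: {                              /* owner must write the block back */
          int owner = __builtin_ctzll(e->sharers);   /* single bit set = the owner      */
          send_fetch(owner, addr);
          /* once the Data Write Back arrives, memory is current again */
          e->sharers |= 1ULL << p;
          e->state    = SHARED;
          send_data_value_reply(p, addr);
          break;
      }
      }
  }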
Example Directory Protocol
Block is Exclusive: the current value of the block is held in the cache of the processor identified by the set Sharers (the owner) => three possible directory requests:
  Read miss: the owner processor is sent a data fetch message, causing the state of the block in the owner's cache to transition to Shared and causing the owner to send the data to the directory, where it is written to memory and sent back to the requesting processor; the identity of the requesting processor is added to the set Sharers, which still contains the identity of the processor that was the owner (since it still has a readable copy); the state is Shared
  Data write-back: the owner processor is replacing the block and hence must write it back, making the memory copy up-to-date (the home directory essentially becomes the owner); the block is now Uncached, and the Sharers set is empty
  Write miss: the block has a new owner; a message is sent to the old owner causing the cache to send the value of the block to the directory, from which it is sent to the requesting processor, which becomes the new owner; Sharers is set to the identity of the new owner, and the state of the block is made Exclusive

Example
Columns: P1 (State, Addr, Value), P2 (State, Addr, Value), Bus/Interconnect (Action, Proc., Addr, Value), Directory (Addr, State, {Procs}), Memory (Value)
Steps:
  P1: Write 10 to A1
  P1: Read A1
  P2: Read A1
  P2: Write 20 to A1
  P2: Write 40 to A2
A1 and A2 map to the same cache block
Example (continued)

Step                  P1               P2               Bus/Interconnect       Directory              Memory
P1: Write 10 to A1                                      WrMs  P1  A1           A1 Ex    {P1}
                      Excl. A1 10                       DaRp  P1  A1  0
P1: Read A1           Excl. A1 10
P2: Read A1
P2: Write 20 to A1
P2: Write 40 to A2

A1 and A2 map to the same cache block
Example (continued)

Step                  P1               P2               Bus/Interconnect       Directory              Memory
P1: Write 10 to A1                                      WrMs  P1  A1           A1 Ex    {P1}
                      Excl. A1 10                       DaRp  P1  A1  0
P1: Read A1           Excl. A1 10
P2: Read A1                            Shar. A1         RdMs  P2  A1
                      Shar. A1 10                       Ftch  P1  A1  10                              A1 = 10 (write back)
                                       Shar. A1 10      DaRp  P2  A1  10       A1 Shar. {P1,P2}       10
P2: Write 20 to A1                     Excl. A1 20      WrMs  P2  A1                                  10
                      Inv.                              Inval. P1  A1          A1 Excl. {P2}          10
P2: Write 40 to A2

A1 and A2 map to the same cache block
Example (complete)

Step                  P1               P2               Bus/Interconnect       Directory              Memory
P1: Write 10 to A1                                      WrMs  P1  A1           A1 Ex    {P1}
                      Excl. A1 10                       DaRp  P1  A1  0
P1: Read A1           Excl. A1 10
P2: Read A1                            Shar. A1         RdMs  P2  A1
                      Shar. A1 10                       Ftch  P1  A1  10                              A1 = 10 (write back)
                                       Shar. A1 10      DaRp  P2  A1  10       A1 Shar. {P1,P2}       10
P2: Write 20 to A1                     Excl. A1 20      WrMs  P2  A1                                  10
                      Inv.                              Inval. P1  A1          A1 Excl. {P2}          10
P2: Write 40 to A2                                      WrMs  P2  A2           A2 Excl. {P2}          0
                                                        WrBk  P2  A1  20       A1 Unca. {}            20
                                       Excl. A2 40      DaRp  P2  A2  0        A2 Excl. {P2}          0

A1 and A2 map to the same cache block

Implementing a Directory
We assume operations are atomic, but they are not; reality is much harder; must avoid deadlock when running out of buffers in the network
Optimization: on a read miss or write miss to a block in the Exclusive state, send the data directly to the requestor from the owner, rather than first to memory and then from memory to the requestor (see the sketch below)
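A sketch of what this optimization changes relative to the earlier directory_read_miss sketch, again with hypothetical names: the Fetch carries the requestor's identity so the owner can forward the block directly.

  #include <stdint.h>

  typedef enum { UNCACHED, SHARED, EXCLUSIVE } dir_state_t;
  typedef struct { dir_state_t state; uint64_t sharers; } dir_entry_t;

  void send_fetch_forward(int owner, int requestor, long addr);  /* hypothetical primitive */

  /* Read miss on an Exclusive block, with the forwarding optimization:
     the owner sends the data directly to the requestor (and writes it back),
     instead of routing it through memory first. */
  void directory_read_miss_exclusive_opt(dir_entry_t *e, int p, long addr)
  {
      int owner = __builtin_ctzll(e->sharers);   /* single bit set = current owner        */
      send_fetch_forward(owner, p, addr);
      e->sharers |= 1ULL << p;                   /* requestor becomes an additional sharer */
      e->state    = SHARED;
  }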
Intel Sandy Bridge Architecture
Newest generation of the Intel architecture
Desktop version integrates the regular processor and a graphics processor on one chip

Intel Sandy Bridge
Sandy Bridge now contains the memory controller, QPI, and graphics processor on chip
  AMD first integrated the memory controller and the HyperTransport interface on the chip
Instruction fetch: decoding variable-length x86 instructions into uops is complex and expensive
Sandy Bridge introduces a uop cache: a hit in the uop cache bypasses the decoding logic
  Uop cache is organized into 32 sets, 8 ways each, 6 uops per line
  Included physically in the L1 cache
  The predicted address probes the uop cache: if found, the instructions bypass the decoding step
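A quick capacity check, not on the original slide:

  32 sets x 8 ways x 6 uops/line = 1,536 uops (roughly 1.5 K uops) in the uop cache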
Intel Sandy Bridge
All 256-bit AVX instructions can execute as a single uop
  In contrast to AMD, where they are broken down into two 128-bit operations
  The FP data path is, however, only 128 bits wide on Sandy Bridge
Functional units are grouped into three domains: integer, SIMD integer, and FP
  Free bypassing within each domain, but a 1-2 clock cycle penalty when bypassing between the different domains
  Simplifies the forwarding logic between the domains for rarely used situations

Intel Sandy Bridge
A ring interconnects the cores, graphics, and L3 cache
  Composed of four different rings: request, snoop, acknowledge, and a 32 B wide data ring
  Responsible for a distributed communication protocol that enforces coherency and ordering
Source: http://www.realworldtech.com/page.cfm?articleid=rwt091810191937
AMD Istanbul/Magny-Cours processor
Source: http://www.phys.uu.nl/~euroben/reports/web10/amd.php

AMD Interlagos Processor
First generation of the new Bulldozer architecture
Two cores form a module
  The two cores in a module share an L1 instruction cache, a floating point unit (FPU), and an L2 cache
  This saves area and power, allowing more cores to be packed in and higher throughput to be attained
  But it leads to some degradation in per-core performance
All modules in a chip share the L3 cache
AMD Interlagos Processor
Source: http://www.realworldtech.com/page.cfm?articleid=rwt082610181333