250P: Computer Systems Architecture. Lecture 14: Synchronization. Anton Burtsev March, 2019


Coherence and Synchronization. Topics: synchronization primitives (Sections 5.4-5.5)

Constructing Locks

Applications have phases (consisting of many instructions) that must be executed atomically, without other parallel processes modifying the data. A lock surrounding the data/code ensures that only one program can be in a critical section at a time. The hardware must provide some basic primitives that allow us to construct locks with different properties. Lock algorithms assume an underlying cache coherence mechanism: when a process updates a lock, other processes will eventually see the update.

Race conditions

Example: a disk driver maintains a list of outstanding requests, and each process can add requests to the list.

List implementation (no locks)

struct list {
    int data;               /* one data element */
    struct list *next;      /* pointer to the next element */
};

struct list *list = 0;      /* global head of the list */

void
insert(int data)
{
    struct list *l;

    l = malloc(sizeof *l);  /* allocate a new list element */
    l->data = data;         /* save data into that element */
    l->next = list;         /* insert into the list: point at the old head... */
    list = l;               /* ...and make the new element the new head */
}

Now, what happens when two CPUs access the same list?

The race, step by step (each step was a figure on the original slides):

Request queue (e.g., pending disk requests): a linked list, where list points to the first element
CPU1 allocates a new request
CPU2 allocates a new request
CPUs 1 and 2 both update the next pointer of their new element to the current head
CPU1 updates the head pointer
CPU2 updates the head pointer
State after the race: the element inserted by CPU1 (shown in red on the slide) is lost, overwritten by CPU2's head update

Mutual exclusion: only one CPU can update the list at a time.

List implementation with locks

struct list {
    int data;
    struct list *next;
};

struct list *list = 0;
struct lock listlock;

void
insert(int data)
{
    struct list *l;

    l = malloc(sizeof *l);
    acquire(&listlock);
    /* critical section */
    l->data = data;
    l->next = list;
    list = l;
    release(&listlock);
}

How can we implement acquire()?

Spinlock

21 void
22 acquire(struct spinlock *lk)
23 {
24   for(;;) {
25     if(!lk->locked) {    // spin until lock is 0
26       lk->locked = 1;    // set it to 1
27       break;
28     }
29   }
30 }

Still incorrect

Two CPUs can reach line #25 at the same time, both see the lock as not taken, and both acquire the lock. Lines #25 and #26 need to be atomic, i.e., indivisible.

Synchronization

The simplest hardware primitive that greatly facilitates synchronization implementations (locks, barriers, etc.) is an atomic read-modify-write. Atomic exchange: swap the contents of a register and a memory location. A special case of atomic exchange is test & set: transfer a memory location into a register and write 1 into the memory location.

acquire:  t&s  register, location
          bnz  register, acquire
          CS
release:  st   location, #0
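As a concrete illustration, here is a minimal C sketch of a correct acquire()/release() built on an atomic exchange, in the spirit of xv6's spinlock. This is a sketch under stated assumptions, not the deck's code: the GCC __sync builtins stand in for the xchg instruction a real kernel would use, and the struct layout is assumed from the earlier slides.

struct spinlock {
    volatile int locked;   /* 0 = free, 1 = held; volatile so spins re-read it */
};

void
acquire(struct spinlock *lk)
{
    /* Atomically swap 1 into lk->locked; the returned old value says
       whether the lock was already held (lines #25-#26 fused into one
       indivisible operation). */
    while (__sync_lock_test_and_set(&lk->locked, 1) != 0)
        ;  /* lock was held by someone else; retry */
    __sync_synchronize();  /* keep the critical section after the acquire */
}

void
release(struct spinlock *lk)
{
    __sync_synchronize();              /* keep the critical section before the release */
    __sync_lock_release(&lk->locked);  /* atomic store of 0 */
}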

Caching Locks

Spin lock: to acquire a lock, a process may enter an infinite loop that keeps attempting a read-modify-write until it succeeds. If the lock is in memory, there is heavy bus traffic, and other processes make little forward progress. Locks can be cached: cache coherence ensures that a lock update is seen by other processors; the process that acquires the lock's cache line in exclusive state gets to update the lock first; the others spin on a local copy, so the external bus sees little traffic.

SMP/UMA/Centralized-Memory Multiprocessor (figure: four processors, each with its own caches, sharing main memory and an I/O system over a bus)

Coherence Traffic for a Lock

If every process spins on an exchange, every exchange instruction attempts a write, so there are many invalidates and the lock value keeps changing ownership. Hence, each process instead keeps reading the lock value: a read does not generate coherence traffic, and every process spins on its locally cached copy. When the lock owner releases the lock by writing a 0, the other copies are invalidated; each spinning process generates a read miss and acquires a new copy; it sees the 0 and attempts an exchange (which requires acquiring the block in exclusive state so the write can happen); the first process to acquire the block in exclusive state acquires the lock, and the others keep spinning.

Test-and-Test-and-Set

lock:  test  register, location
       bnz   register, lock
       t&s   register, location
       bnz   register, lock
       CS
       st    location, #0
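A C sketch of the same idea, reusing the assumed spinlock struct and GCC builtin from the earlier sketch: spin on an ordinary read, which is served from the local cache, and only attempt the expensive atomic exchange once the lock looks free.

void
ttas_acquire(struct spinlock *lk)
{
    for (;;) {
        /* test: spin on the locally cached copy; plain reads generate
           no coherence traffic while the line stays valid (locked is
           volatile so the compiler actually re-reads it) */
        while (lk->locked)
            ;
        /* test-and-set: the lock looked free, try to take it atomically */
        if (__sync_lock_test_and_set(&lk->locked, 1) == 0) {
            __sync_synchronize();
            return;  /* old value was 0: we got the lock */
        }
    }
}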

Load-Linked and Store Conditional

LL-SC is an implementation of atomic read-modify-write with very high flexibility. LL: read a value and update a table indicating that you have read this address; then perform any amount of computation. SC: attempt to store a result into the same memory location; the store succeeds only if the table indicates that no other process attempted a store since the local LL (success only if the operation was effectively atomic). SC implementations do not generate bus traffic if the SC fails; hence, LL-SC is more efficient than test&test&set.
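LL/SC is not directly expressible in portable C, but C11's weak compare-exchange typically compiles to an LL/SC pair on machines that have one. A sketch (not from the slides) of an atomic fetch-and-increment built this way:

#include <stdatomic.h>

/* On an LL/SC machine, atomic_compare_exchange_weak usually becomes
   an LL/SC pair; a failed SC simply retries the loop. */
int
fetch_and_inc(atomic_int *p)
{
    int old = atomic_load(p);
    while (!atomic_compare_exchange_weak(p, &old, old + 1))
        ;  /* on failure, old is refreshed with the current value */
    return old;
}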

Spin Lock with Low Coherence Traffic

lockit: LL     R2, 0(R1)   ; load linked, generates no coherence traffic
        BNEZ   R2, lockit  ; not available, keep spinning
        DADDUI R2, R0, #1  ; put value 1 in R2
        SC     R2, 0(R1)   ; store-conditional succeeds if no one
                           ; updated the lock since the last LL
        BEQZ   R2, lockit  ; confirm that SC succeeded, else keep trying

If there are i processes waiting for the lock, how many bus transactions happen? 1 write by the releaser + i read-miss requests + i responses + 1 write by the acquirer + 0 (the i-1 failed SCs generate no traffic) + (i-1) read-miss requests + (i-1) responses.

Further Reducing Bandwidth Needs

Ticket lock: every arriving process atomically picks up a ticket and increments the ticket counter (with an LL-SC); the process then keeps checking the now-serving variable to see if its turn has arrived; after finishing its turn, it increments the now-serving variable.

Ticket lock in Linux

struct spinlock_t {
    int current_ticket;
    int next_ticket;
};

void spin_lock(spinlock_t *lock)
{
    int t = atomic_fetch_and_inc(&lock->next_ticket);
    while (t != lock->current_ticket)
        ;  /* spin */
}

void spin_unlock(spinlock_t *lock)
{
    lock->current_ticket++;
}

What is really wrong with locks? Scalability

48-core AMD server

Exim collapse

Oprofile results

Exim collapse

sys_open eventually calls spin_lock and spin_unlock, and these use many more cycles than the critical section they protect.

Ticket lock in Linux (same code as above)

Spin lock implementation (figure walkthrough, step by step):

Allocate a ticket: one atomic fetch-and-increment, 120-420 cycles
Update the ticket value: the holder releases the lock by incrementing current_ticket
A bunch of cores are spinning on current_ticket
The update triggers a broadcast message that invalidates the cached value
The spinning cores no longer have the value of current_ticket
Every spinning core re-reads the value
Total cost: (120-420) * N/2 cycles

In most architectures, the cache-coherence reads are serialized (either by a shared bus or at the cache line's home or directory node), so completing them all takes time proportional to the number of cores. The core that is next in line for the lock can expect to receive its copy of the cache line midway through this process, hence the N/2 factor: with 48 cores, for example, the next acquirer waits for roughly 24 serialized cache-line transfers.

Atomic synchronization primitives do not scale well

Atomic increment on 64 cores
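The slide shows a figure; a sketch of the kind of microbenchmark that produces such a curve (entirely assumed, not from the slides): every thread hammers one shared counter with atomic increments, so each increment must steal the cache line from the previous owner. Wrap the loop with a timer to reproduce the per-operation latency curve.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <stdlib.h>

static atomic_long counter;          /* one shared cache line, by design */

#define OPS_PER_THREAD 1000000L

static void *
worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < OPS_PER_THREAD; i++)
        atomic_fetch_add(&counter, 1);   /* bounces the line between cores */
    return NULL;
}

int
main(int argc, char **argv)
{
    int nthreads = argc > 1 ? atoi(argv[1]) : 4;
    pthread_t *t = malloc(sizeof(*t) * nthreads);

    for (int i = 0; i < nthreads; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < nthreads; i++)
        pthread_join(t[i], NULL);

    printf("final counter: %ld\n", atomic_load(&counter));
    return 0;
}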

What can we do about it?

Is it possible to build scalable spinlocks?

MCS lock (Mellor-Crummey and M. L. Scott)

struct qnode {
    volatile void *next;
    volatile char locked;
};

typedef struct {
    struct qnode *v;   /* tail of the waiter queue */
} mcslock_t;

arch_mcs_lock(mcslock_t *l, volatile struct qnode *mynode)
{
    struct qnode *predecessor;

    mynode->next = NULL;
    /* Atomically make ourselves the tail; the old tail is our predecessor. */
    predecessor = (struct qnode *)xchg((long *)&l->v, (long)mynode);
    if (predecessor) {
        mynode->locked = 1;
        barrier();
        predecessor->next = mynode;   /* link in behind the predecessor */
        while (mynode->locked)        /* spin on our own cache line */
            ;
    }
}

MCS lock

MCS unlock (the acquire code is the same as above)

arch_mcs_unlock(mcslock_t *l, volatile struct qnode *mynode)
{
    if (!mynode->next) {
        /* No known successor: try to swing the tail back to empty. */
        if (cmpxchg((long *)&l->v, (long)mynode, 0) == (long)mynode)
            return;
        /* A successor is arriving; wait until it links itself in. */
        while (!mynode->next)
            ;
    }
    ((struct qnode *)mynode->next)->locked = 0;  /* hand over the lock */
}
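A sketch of how a caller might use this interface, under the assumption (standard for MCS locks, though not shown on the slides) that the qnode is supplied by the acquirer, typically on its stack, and must stay alive until the matching unlock:

mcslock_t l = { 0 };

void
critical(void)
{
    struct qnode me;           /* this acquirer's queue node */

    arch_mcs_lock(&l, &me);    /* enqueue ourselves, spin on me.locked */
    /* ... critical section ... */
    arch_mcs_unlock(&l, &me);  /* pass the lock to me.next, if anyone */
}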

Why does this scale?

Ticket spinlock: remember, O(N) re-fetch messages after the invalidation broadcast.

The key is the annotated spin in arch_mcs_lock (the rest of the code is unchanged from above):

        while (mynode->locked)
            ;   /* one re-fetch message after invalidation */

Each waiter spins on its own qnode, so releasing the lock invalidates, and forces a re-fetch of, exactly one waiter's cache line instead of O(N) of them.

Cache line isolation

struct qnode {
    volatile void *next;
    volatile char locked;
    char pad[0] __attribute__((aligned(64)));
};

typedef struct {
    struct qnode *v __attribute__((aligned(64)));
} mcslock_t;

Aligning each qnode (and the lock's tail pointer) to a 64-byte cache line keeps waiters' spin flags on separate lines, so one waiter's traffic does not disturb another (no false sharing).

Exim: MCS vs ticket lock

Thank you!