CS194-24 Advanced Operating Systems Structures and Implementation, Lecture 6: Parallelism and Synchronization

Goals for Today
CS194-24 Advanced Operating Systems Structures and Implementation
Lecture 6: Parallelism and Synchronization
February 11th, 2013, Prof. John Kubiatowicz
- Multithreading / POSIX support for threads
- Interprocess communication
- Synchronization
Interactive is important! Ask questions!
Note: Some slides and/or pictures in the following are adapted from slides (c) 2013.

Recall: Process Scheduling
- PCBs move from queue to queue as they change state
- Decisions about which order to remove from queues are scheduling decisions
- Many algorithms possible (a few weeks from now)

Recall: Example of fork()

    int main(int argc, char **argv) {
        char *name = argv[0];
        int child_pid = fork();
        if (child_pid == 0) {
            printf("Child of %s sees PID of %d\n", name, child_pid);
            return 0;
        } else {
            printf("I am the parent %s. My child is %d\n", name, child_pid);
            return 0;
        }
    }

    % ./forktest
    Child of forktest sees PID of 0
    I am the parent forktest. My child is 486

A (more compelling?) example of fork(): Web Server

Serial version:

    int main() {
        int listen_fd = listen_for_clients();
        while (1) {
            int client_fd = accept(listen_fd);
            handle_client_request(client_fd);
            close(client_fd);
        }
    }

Process per request:

    int main() {
        int listen_fd = listen_for_clients();
        while (1) {
            int client_fd = accept(listen_fd);
            if (fork() == 0) {
                handle_client_request(client_fd);
                close(client_fd);   // Close FD in child when done
                exit(0);
            } else {
                close(client_fd);   // Close FD in parent
                // Let exited children rest in peace!
                while (waitpid(-1, &status, WNOHANG) > 0);
            }
        }
    }

Recall: Parent-Child relationship
- Typical process tree for a Solaris system
- Every thread (and/or process) has a parentage
  - A parent is a thread that creates another thread
  - A child of a parent was created by that parent

Multiple Processes Collaborate on a Task (Proc 1, Proc 2, Proc 3)
- (Relatively) high creation/memory overhead
- (Relatively) high context-switch overhead
- Need a communication mechanism, since separate address spaces isolate processes:
  - Shared-memory mapping: accomplished by mapping addresses to common DRAM; read and write through memory
  - Message passing: send() and receive() messages; works across a network
  - Pipes, sockets, signals, synchronization primitives, ...

Message queues
- What are they? Similar to FIFO pipes, except that a tag (type) is matched when reading/writing
  - Allows cutting in line ("I am only interested in a particular type of message")
  - Equivalent to merging multiple FIFO pipes into one
- Creating a message queue: int msgget(key_t key, int msgflag);
  - The key can be any large number. To avoid using conflicting keys in different programs, use ftok() (the "key master"): key_t ftok(const char *path, int id);
    - path points to a file that the process can stat
    - id: project ID; only the last 8 bits are used
- Message queue operations (see the sketch below):
    int msgget(key_t key, int flag);
    int msgctl(int msgid, int cmd, struct msqid_ds *buf);
    int msgsnd(int msgid, const void *ptr, size_t nbytes, int flag);
    int msgrcv(int msgid, void *ptr, size_t nbytes, long type, int flag);
- The performance advantage over pipes is no longer there on newer systems
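To make the API above concrete, here is a minimal sketch of a program that creates a queue, sends one typed message, and receives it back. It is illustrative only: the path /tmp/somefile, the message type 2, and the 64-byte payload are assumptions, not part of the lecture.

    /* Hypothetical message-queue demo: create, send a typed message, receive it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/msg.h>

    struct demo_msg {          /* first field must be a positive long "type" tag */
        long mtype;
        char mtext[64];
    };

    int main(void) {
        key_t key = ftok("/tmp/somefile", 'A');   /* assumes this file exists */
        int qid = msgget(key, 0644 | IPC_CREAT);  /* create (or open) the queue */
        if (qid < 0) { perror("msgget"); exit(1); }

        struct demo_msg out = { .mtype = 2 };
        strcpy(out.mtext, "hello via message queue");
        msgsnd(qid, &out, sizeof(out.mtext), 0);  /* enqueue a type-2 message */

        struct demo_msg in;
        msgrcv(qid, &in, sizeof(in.mtext), 2, 0); /* read only messages of type 2 */
        printf("received: %s\n", in.mtext);

        msgctl(qid, IPC_RMID, NULL);              /* remove the queue when done */
        return 0;
    }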

Shared Memory Communication
(Figure: two virtual address spaces, each with its own code, data, heap, and stack, plus a shared region mapped into both.)
- Communication occurs by simply reading/writing to the shared address page
- Really low-overhead communication
- Introduces complex synchronization problems

Shared Memory
- Common chunk of read/write memory among processes
(Figure: a shared-memory segment with a unique key, created by one process and attached by several other processes via pointers.)

Creating Shared Memory

    // Create new segment
    int shmget(key_t key, size_t size, int shmflg);

    Example:
    key_t key;
    int shmid;
    key = ftok("<somefile>", 'A');
    shmid = shmget(key, 1024, 0644 | IPC_CREAT);

- Special key: IPC_PRIVATE (create new segment)
- Flags: IPC_CREAT (create new segment), IPC_EXCL (fail if a segment with the key already exists); the lower 9 bits are the permissions to use on a new segment

Attach and Detach Shared Memory

    // Attach
    void *shmat(int shmid, void *shmaddr, int shmflg);
    // Detach
    int shmdt(void *shmaddr);

    Example:
    key_t key;
    int shmid;
    char *data;
    key = ftok("<somefile>", 'A');
    shmid = shmget(key, 1024, 0644);
    data = shmat(shmid, (void *)0, 0);
    shmdt(data);

- Flags: SHM_RDONLY, SHM_REMAP
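Putting shmget()/shmat()/shmdt() together, here is a minimal sketch of one process creating a 1 KB segment, writing into it through the attached pointer, and cleaning up. A second process calling ftok() with the same file and id would attach the same segment; the path and sizes are illustrative assumptions.

    /* Hypothetical shared-memory demo: create, attach, write/read, detach, remove. */
    #include <stdio.h>
    #include <string.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>

    int main(void) {
        key_t key = ftok("/tmp/somefile", 'A');          /* assumes this file exists */
        int shmid = shmget(key, 1024, 0644 | IPC_CREAT);
        if (shmid < 0) { perror("shmget"); return 1; }

        char *data = shmat(shmid, NULL, 0);              /* attach where the kernel chooses */
        strcpy(data, "hello through shared memory");     /* ordinary stores, visible to other attachers */
        printf("%s\n", data);

        shmdt(data);                                     /* detach from this address space */
        shmctl(shmid, IPC_RMID, NULL);                   /* mark the segment for removal */
        return 0;
    }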

Administrivia
- In the news: new Apple product rumor, the iWatch
  - Is it real? Is it desirable?
  - Needs *really* long battery life; can't require keyboard entry
  - What might it do? Supposedly running iOS?
- Make sure you update your info on Redmine
  - Some of you still have cs194-xx as your name! Update address, etc.
- Groups posted on the website and on Piazza
- Problems with infrastructure? We are developing an FAQ; please tell us about problems
- Design document
  - What is in a design document?
  - BDD => No design document!?

Thread Level Parallelism (TLP)
- In modern processors, Instruction Level Parallelism (ILP) exploits implicit parallel operations within a loop or straight-line code segment
- Thread Level Parallelism (TLP) is explicitly represented by the use of multiple threads of execution that are inherently parallel
  - Threads can be on a single processor, or on multiple processors
- Concurrency vs. parallelism
  - Concurrency is when two tasks can start, run, and complete in overlapping time periods. It doesn't necessarily mean they'll ever both be running at the same instant, e.g., multitasking on a single-threaded machine.
  - Parallelism is when tasks literally run at the same time, e.g., on a multicore processor.
- Goal: use multiple instruction streams to improve
  - Throughput of computers that run many programs
  - Execution time of multi-threaded programs

Multiprocessing vs. Multiprogramming
- Remember the definitions:
  - Multiprocessing: multiple CPUs
  - Multiprogramming: multiple jobs or processes
  - Multithreading: multiple threads per process
- What does it mean to run two threads "concurrently"?
  - The scheduler is free to run threads in any order and interleaving: FIFO, random, ...
  - The dispatcher can choose to run each thread to completion, or to time-slice in big chunks or small chunks
(Figure: under multiprocessing, A, B, and C run simultaneously on separate CPUs; under multiprogramming, their executions are interleaved on one CPU, e.g. A B C A B C B.)

Correctness for systems with concurrent threads
- If the dispatcher can schedule threads in any way, programs must work under all circumstances
  - Can you test for this? How can you know if your program works?
- Independent threads:
  - No state shared with other threads
  - Deterministic: input state determines results
  - Reproducible: can recreate starting conditions, I/O
  - Scheduling order doesn't matter (if switch() works!!!)
- Cooperating threads:
  - Shared state between multiple threads
  - Non-deterministic
  - Non-reproducible
- Non-deterministic and non-reproducible means that bugs can be intermittent
  - Sometimes called "Heisenbugs"

Interactions Complicate Debugging
- Is any program truly independent?
  - Every process shares the file system, OS resources, network, etc.
  - Extreme example: a buggy device driver causes thread A to crash an "independent" thread B
- You probably don't realize how much you depend on reproducibility:
  - Example: evil C compiler
    - Modifies files behind your back by inserting errors into a C program unless you insert debugging code
  - Example: debugging statements can overrun the stack
- Non-deterministic errors are really difficult to find
  - Example: memory layout of kernel + user programs
    - Depends on scheduling, which depends on the timer/other things
    - Original UNIX had a bunch of non-deterministic errors
  - Example: something which does interesting I/O
    - User typing of letters used to help generate secure keys

Why allow cooperating threads?
- People cooperate; computers help/enhance people's lives, so computers must cooperate
  - By analogy, the non-reproducibility/non-determinism of people is a notable problem for carefully laid plans
- Advantage 1: share resources
  - One computer, many users
  - One bank balance, many ATMs
    - What if ATMs were only updated at night?
  - Embedded systems (robot control: coordinate arm & hand)
- Advantage 2: speedup
  - Overlap I/O and computation
    - Many different file systems do read-ahead
  - Multiprocessors: chop a program up into parallel pieces
- Advantage 3: modularity
  - More important than you might think
  - Chop a large problem up into simpler pieces
    - To compile, for instance, gcc calls cpp, cc1, cc2, as, ld
    - Makes the system easier to extend

High-level Example: Web Server
- The server must handle many requests
- Non-cooperating version:

    serverLoop() {
        con = AcceptCon();
        ProcessFork(ServiceWebPage(), con);
    }

- What are some disadvantages of this technique?

Threaded Web Server
- Now, use a single process
- Multithreaded (cooperating) version (a pthreads sketch follows below):

    serverLoop() {
        connection = AcceptCon();
        ThreadFork(ServiceWebPage(), connection);
    }

- Looks almost the same, but has many advantages:
  - Can share file caches kept in memory, results of CGI scripts, other things
  - Threads are much cheaper to create than processes, so this has a lower per-request overhead
- Question: would a user-level (say one-to-many) thread package make sense here?
  - When one request blocks on disk, all block
- What about Denial of Service attacks or digg/Slashdot effects?
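As a rough illustration of the ThreadFork() line above, here is a minimal pthreads sketch of the per-request threading pattern. accept_con() and service_web_page() are hypothetical stand-ins for the slide's AcceptCon()/ServiceWebPage(); they just fake three requests so the example terminates.

    /* Minimal pthreads sketch of the threaded web server loop above. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static int accept_con(void) {               /* stand-in: pretend a connection arrived */
        static int next_fd = 100;
        sleep(1);
        return next_fd++;
    }

    static void service_web_page(int con) {     /* stand-in: pretend to serve the request */
        printf("thread %lu serving connection %d\n",
               (unsigned long)pthread_self(), con);
    }

    static void *worker(void *arg) {
        service_web_page((int)(long)arg);       /* unpack the connection descriptor */
        return NULL;
    }

    int main(void) {
        for (int i = 0; i < 3; i++) {           /* a real server would loop forever */
            int con = accept_con();
            pthread_t tid;
            pthread_create(&tid, NULL, worker, (void *)(long)con);  /* "ThreadFork" */
            pthread_detach(tid);                /* fire and forget, as in the slide */
        }
        sleep(1);                               /* crude: let detached workers finish */
        return 0;
    }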

Thread Pools
- Problem with the previous version: unbounded threads
  - When the web site becomes too popular, throughput sinks
- Instead, allocate a bounded "pool" of worker threads, representing the maximum level of multiprogramming
(Figure: a master thread enqueues connections onto a queue; a pool of worker threads dequeues and services them.)

    master() {
        allocThreads(worker, queue);
        while(TRUE) {
            con = AcceptCon();
            Enqueue(queue, con);
            wakeUp(queue);
        }
    }

    worker(queue) {
        while(TRUE) {
            con = Dequeue(queue);
            if (con == null)
                sleepOn(queue);
            else
                ServiceWebPage(con);
        }
    }

Common Notions of Thread Creation
- cobegin/coend
      cobegin
          job1(a1);
          job2(a2);
      coend
  - Statements in the block may run in parallel
  - cobegins may be nested
  - Scoped, so you cannot have a missing coend
- fork/join
      tid1 = fork(job1, a1);
      job2(a2);
      join tid1;
  - Forked procedure runs in parallel
  - Wait at the join point if it's not finished
- future
      v = future(job1(a1));
      ... = ...v...;
  - Future possibly evaluated in parallel
  - Attempt to use the return value will wait
- forall
      forall(i from 1 to N)
          C[I] = A[I] + B[I]
      end
  - Separate thread launched for each iteration
  - Implicit join at the end
- Threads expressed in the code may not turn into independent computations
  - Only create threads if processors are idle
  - Example: thread-stealing runtimes such as Cilk

Overview of POSIX Threads
- Pthreads: the POSIX threading interface
  - System calls to create and synchronize threads
  - Should be relatively uniform across UNIX-like OS platforms
  - Originally IEEE POSIX 1003.1c
- Pthreads contain support for
  - Creating parallelism
  - Synchronizing
  - No explicit support for communication, because shared memory is implicit; a pointer to shared data is passed to a thread
    - Only for the heap! Stacks are not shared

Forking POSIX Threads
- Signature:
      int pthread_create(pthread_t *,
                         const pthread_attr_t *,
                         void * (*)(void *),
                         void *);
- Example call:
      errcode = pthread_create(&thread_id, &thread_attribute, &thread_fun, &fun_arg);
- thread_id is the thread id or handle (used to halt, etc.)
- thread_attribute: various attributes
  - Standard default values obtained by passing a NULL pointer
  - Sample attribute: minimum stack size
- thread_fun: the function to be run (takes and returns void*)
- fun_arg: an argument can be passed to thread_fun when it starts
- errcode will be set nonzero if the create operation fails
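Here is a minimal compilable sketch of the pthread_create() call above that also sets the sample attribute mentioned on the slide (a minimum stack size) and passes an argument to the thread. The 1 MB stack size and the argument value are illustrative choices, not requirements.

    /* Hypothetical pthread_create() demo with one attribute and one argument. */
    #include <pthread.h>
    #include <stdio.h>

    static void *thread_fun(void *fun_arg) {
        printf("thread got argument %d\n", *(int *)fun_arg);
        return NULL;
    }

    int main(void) {
        pthread_t thread_id;
        pthread_attr_t thread_attribute;
        int fun_arg = 42;

        pthread_attr_init(&thread_attribute);
        pthread_attr_setstacksize(&thread_attribute, 1 << 20);  /* sample attribute: 1 MB stack */

        int errcode = pthread_create(&thread_id, &thread_attribute,
                                     thread_fun, &fun_arg);
        if (errcode != 0)
            fprintf(stderr, "pthread_create failed: %d\n", errcode);

        pthread_join(thread_id, NULL);            /* wait for the thread to finish */
        pthread_attr_destroy(&thread_attribute);
        return 0;
    }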

Simple Threading Example (pthreads)

    void* SayHello(void *foo) {
        printf("Hello, world!\n");
        return NULL;
    }

    int main() {
        pthread_t threads[16];
        int tn;
        for (tn = 0; tn < 16; tn++) {
            pthread_create(&threads[tn], NULL, SayHello, NULL);
        }
        for (tn = 0; tn < 16; tn++) {
            pthread_join(threads[tn], NULL);
        }
        return 0;
    }

- E.g., compile using gcc -lpthread

Shared Data and Threads
- Variables declared outside of main are shared
- Objects allocated on the heap may be shared (if a pointer is passed)
- Variables on the stack are private: passing a pointer to these around to other threads can cause problems
- Often done by creating a large "thread data" struct, which is passed into all threads as an argument (see the sketch below)

    char *message = "Hello World!\n";
    pthread_create(&thread1, NULL, print_fun, (void*) message);

Some More Pthread Functions
- pthread_yield();
  - Informs the scheduler that the thread is willing to yield its quantum; requires no arguments
- pthread_exit(void *value);
  - Exit the thread and pass value to the joining thread (if it exists)
- pthread_join(pthread_t thread, void **result);
  - Wait for the specified thread to finish. Place the exit value into *result.
- Others:
  - pthread_t me; me = pthread_self();
    - Allows a pthread to obtain its own identifier
  - pthread_t thread; pthread_detach(thread);
    - Informs the library that the thread's exit status will not be needed by subsequent pthread_join calls, resulting in better thread performance
- For more information consult the library or the man pages, e.g., man -k pthread

Thread Scheduling
(Figure: the main thread forks threads A through E, which run and interleave over time.)
- Once created, when will a given thread run?
  - It is up to the operating system or hardware, but it will run eventually, even if you have more threads than cores
  - But scheduling may be non-ideal for your application
- The programmer can provide hints or affinity in some cases
  - E.g., create exactly P threads and assign them to P cores
- Can provide user-level scheduling on some systems
  - Application-specific tuning based on the programming model
  - Work in the Par Lab on making user-level scheduling easy to do (Lithe, PULSE)
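Here is a minimal sketch of the "large thread data struct" idea mentioned above: each thread receives a pointer to its own struct, which in turn points at data shared by all threads. The struct layout and the names thread_arg_t and NTHREADS are illustrative assumptions.

    /* Hypothetical per-thread argument struct pointing at shared data. */
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    typedef struct {
        int   id;             /* private to this thread */
        int  *shared_counts;  /* points at data shared by all threads */
    } thread_arg_t;

    static void *work(void *arg) {
        thread_arg_t *a = (thread_arg_t *)arg;
        a->shared_counts[a->id] = a->id * a->id;   /* each thread writes its own slot */
        return NULL;
    }

    int main(void) {
        /* counts lives on main's stack; sharing it is safe only because
         * main joins every thread before the array goes out of scope. */
        int counts[NTHREADS] = {0};
        pthread_t tid[NTHREADS];
        thread_arg_t args[NTHREADS];

        for (int i = 0; i < NTHREADS; i++) {
            args[i] = (thread_arg_t){ .id = i, .shared_counts = counts };
            pthread_create(&tid[i], NULL, work, &args[i]);
        }
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);

        for (int i = 0; i < NTHREADS; i++)
            printf("counts[%d] = %d\n", i, counts[i]);
        return 0;
    }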

Review: Synchronization problem with Threads
- One thread per transaction, each running:

    Deposit(acctId, amount) {
        acct = GetAccount(acctId);   /* May use disk I/O */
        acct->balance += amount;
        StoreAccount(acct);          /* Involves disk I/O */
    }

- Unfortunately, shared state can get corrupted under an interleaving like this:

    Thread 1                         Thread 2
    load r1, acct->balance
                                     load r1, acct->balance
                                     add r1, amount2
                                     store r1, acct->balance
    add r1, amount1
    store r1, acct->balance

- Atomic operation: an operation that always runs to completion or not at all
  - It is indivisible: it cannot be stopped in the middle, and state cannot be modified by someone else in the middle

Implementation of Locks by Disabling Interrupts
- Key idea: maintain a lock variable and impose mutual exclusion only during operations on that variable

    int value = FREE;

    Acquire() {
        disable interrupts;
        if (value == BUSY) {
            put thread on wait queue;
            Go to sleep();
            // Enable interrupts?
        } else {
            value = BUSY;
        }
        enable interrupts;
    }

    Release() {
        disable interrupts;
        if (anyone on wait queue) {
            take thread off wait queue;
            Place on ready queue;
        } else {
            value = FREE;
        }
        enable interrupts;
    }

How to implement locks? Atomic Read-Modify-Write instructions
- Problems with the previous solution?
  - Can't let users disable interrupts! (Why?)
  - Doesn't work well on a multiprocessor
    - Disabling interrupts on all processors requires messages and would be very time consuming
- Alternative: atomic instruction sequences
  - These instructions read a value from memory and write a new value atomically
  - Hardware is responsible for implementing this correctly
    - On both uniprocessors (not too hard)
    - And multiprocessors (requires help from the cache coherence protocol)
  - Unlike disabling interrupts, can be used on both uniprocessors and multiprocessors

Examples of Read-Modify-Write

    test&set (&address) {           /* most architectures */
        result = M[address];
        M[address] = 1;
        return result;
    }

    swap (&address, register) {     /* x86 */
        temp = M[address];
        M[address] = register;
        register = temp;
    }

    compare&swap (&address, reg1, reg2) { /* */
        if (reg1 == M[address]) {
            M[address] = reg2;
            return success;
        } else {
            return failure;
        }
    }

    load-linked & store-conditional (&address) {  /* R4000, Alpha */
    loop:
        ll   r1, M[address];
        movi r2, 1;                 /* Can do arbitrary computation */
        sc   r2, M[address];
        beqz r2, loop;
    }
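For comparison with the pseudo-instructions above, C11 exposes compare&swap portably as atomic_compare_exchange_strong(). The sketch below uses it to update the Deposit example's balance without a lock; the retry loop is the standard pattern when another thread wins the race. This is an illustration of the primitive, not how the lecture implements Deposit.

    /* Hypothetical lock-free deposit using C11 compare-and-swap. */
    #include <stdatomic.h>
    #include <pthread.h>
    #include <stdio.h>

    static _Atomic int balance = 0;

    /* Atomically add amount: retry whenever another thread changed balance
     * between our read and our compare&swap. */
    static void atomic_deposit(int amount) {
        int old = atomic_load(&balance);
        while (!atomic_compare_exchange_strong(&balance, &old, old + amount)) {
            /* failure refreshed old with the current value; recompute and retry */
        }
    }

    static void *depositor(void *arg) {
        (void)arg;
        for (int i = 0; i < 100000; i++)
            atomic_deposit(1);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, depositor, NULL);
        pthread_create(&t2, NULL, depositor, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("balance = %d (expect 200000)\n", atomic_load(&balance));
        return 0;
    }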

Implementing Locks with test&set
- Another flawed, but simple, solution (a C11 rendering appears after the next slide):

    int value = 0;  // Free

    Acquire() {
        while (test&set(value));  // while busy
    }

    Release() {
        value = 0;
    }

- Simple explanation:
  - If the lock is free, test&set reads 0 and sets value = 1, so the lock is now busy. It returns 0, so the while exits.
  - If the lock is busy, test&set reads 1 and sets value = 1 (no change). It returns 1, so the while loop continues.
  - When we set value = 0, someone else can get the lock.
- Busy-waiting: the thread consumes cycles while waiting

Better Locks using test&set
- Can we build test&set locks without busy-waiting?
  - Can't entirely, but we can minimize it!
  - Idea: only busy-wait to atomically check the lock value

    int guard = 0;
    int value = FREE;

    Acquire() {
        // Short busy-wait time
        while (test&set(guard));
        if (value == BUSY) {
            put thread on wait queue;
            go to sleep() & guard = 0;
        } else {
            value = BUSY;
            guard = 0;
        }
    }

    Release() {
        // Short busy-wait time
        while (test&set(guard));
        if (anyone on wait queue) {
            take thread off wait queue;
            Place on ready queue;
        } else {
            value = FREE;
        }
        guard = 0;
    }

- Note: sleep has to be sure to reset the guard variable
  - Why can't we do it just before or just after the sleep?
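The flawed-but-simple lock above maps directly onto C11's atomic_flag, whose atomic_flag_test_and_set() plays the role of the test&set instruction. A minimal sketch, with names acquire/release chosen to mirror the slide:

    /* Hypothetical test&set spin lock built on C11 atomics. */
    #include <stdatomic.h>

    static atomic_flag lock_flag = ATOMIC_FLAG_INIT;   /* clear == FREE (value 0) */

    void acquire(void) {
        while (atomic_flag_test_and_set(&lock_flag))   /* returns old value; 1 means busy */
            ;                                          /* busy-wait */
    }

    void release(void) {
        atomic_flag_clear(&lock_flag);                 /* value = 0: someone else can get the lock */
    }

    int main(void) {
        acquire();
        /* critical section */
        release();
        return 0;
    }

With two threads incrementing a shared counter between acquire() and release(), the final count comes out right, at the cost of exactly the busy-waiting the slide warns about.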

Higher-level Primitives than Locks
- What is the right abstraction for synchronizing threads that share memory?
  - Want as high-level a primitive as possible
- Good primitives and practices are important!
  - Since execution is not entirely sequential, it is really hard to find bugs, since they happen rarely
  - UNIX is pretty stable now, but up until about the mid-80s (10 years after it started), systems running UNIX would crash every week or so from concurrency bugs
- Synchronization is a way of coordinating multiple concurrent activities that are using shared state
  - This lecture and the next present a couple of ways of structuring the sharing

Recall: Semaphores
- Semaphores are a kind of generalized lock
  - First defined by Dijkstra in the late 60s
  - Main synchronization primitive used in original UNIX
- Definition: a semaphore has a non-negative integer value and supports the following two operations:
  - P(): an atomic operation that waits for the semaphore to become positive, then decrements it by 1
    - Think of this as the wait() operation
  - V(): an atomic operation that increments the semaphore by 1, waking up a waiting P, if any
    - Think of this as the signal() operation
  - Note that P() stands for "proberen" (to test) and V() stands for "verhogen" (to increment) in Dutch

Semaphores Like Integers Except...
- Semaphores are like integers, except
  - No negative values
  - Only operations allowed are P and V: can't read or write the value, except to set it initially
  - Operations must be atomic
    - Two P's together can't decrement the value below zero
    - Similarly, a thread going to sleep in P won't miss a wakeup from V, even if they both happen at the same time
(Figure: railway semaphore analogy: a semaphore initialized to 2 for resource control; its value drops from 2 to 1 to 0 as trains enter the protected section.)

Two Uses of Semaphores
- Mutual exclusion (initial value = 1)
  - Also called a "binary semaphore". Can be used for mutual exclusion:
      semaphore.P();
      // Critical section goes here
      semaphore.V();
- Scheduling constraints (initial value = 0)
  - Locks are fine for mutual exclusion, but what if you want a thread to wait for something?
  - Example: suppose you had to implement ThreadJoin, which must wait for a thread to terminate (see the POSIX sketch below):
      Initial value of semaphore = 0
      ThreadJoin {
          semaphore.P();
      }
      ThreadFinish {
          semaphore.V();
      }
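POSIX exposes counting semaphores directly. Below is a minimal sketch of the "scheduling constraint" use above: an unnamed semaphore initialized to 0, where the main thread's P() (sem_wait) blocks until the finishing thread's V() (sem_post). This mirrors the ThreadJoin/ThreadFinish pattern as an illustration; it is not how pthread_join itself is implemented.

    /* Hypothetical ThreadJoin/ThreadFinish via a POSIX semaphore (initial value 0). */
    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t done;                 /* initial value 0 */

    static void *child(void *arg) {
        (void)arg;
        printf("child: finishing work\n");
        sem_post(&done);               /* V(): "ThreadFinish" */
        return NULL;
    }

    int main(void) {
        sem_init(&done, 0, 0);         /* not shared between processes, value 0 */
        pthread_t t;
        pthread_create(&t, NULL, child, NULL);

        sem_wait(&done);               /* P(): "ThreadJoin", blocks until the child posts */
        printf("main: child signaled completion\n");

        pthread_join(t, NULL);         /* still reclaim the thread's resources */
        sem_destroy(&done);
        return 0;
    }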

Monitor with Condition Variables
- Lock: the lock provides mutual exclusion to shared data
  - Always acquire before accessing the shared data structure
  - Always release after finishing with the shared data
  - Lock is initially free
- Condition variable: a queue of threads waiting for something inside a critical section
  - Key idea: make it possible to go to sleep inside a critical section by atomically releasing the lock at the time we go to sleep
  - Contrast to semaphores: can't wait inside a critical section

Simple Monitor Example
- Here is an (infinite) synchronized queue (a pthreads rendering appears after the Summary):

    Lock lock;
    Condition dataready;
    Queue queue;

    AddToQueue(item) {
        lock.Acquire();             // Get Lock
        queue.enqueue(item);        // Add item
        dataready.signal();         // Signal any waiters
        lock.Release();             // Release Lock
    }

    RemoveFromQueue() {
        lock.Acquire();             // Get Lock
        while (queue.isEmpty()) {
            dataready.wait(&lock);  // If nothing, sleep
        }
        item = queue.dequeue();     // Get next item
        lock.Release();             // Release Lock
        return(item);
    }

Problem: Busy-Waiting for Lock
- Positives for this solution
  - Machine can receive interrupts
  - User code can use this lock
  - Works on a multiprocessor
- Negatives
  - It is very inefficient because the busy-waiting thread will consume cycles waiting
  - The waiting thread may take cycles away from the thread holding the lock (no one wins!)
  - Priority inversion: if the busy-waiting thread has higher priority than the thread holding the lock, there is no progress!
    - Priority inversion was a problem with the original Martian rover
- For semaphores and monitors, a waiting thread may wait for an arbitrary length of time!
  - Thus even if busy-waiting was OK for locks, it is definitely not OK for other primitives
  - Homework/exam solutions should not have busy-waiting!

Busy-wait vs. Blocking
- Busy-wait, i.e. spin lock
  - Keep trying to acquire the lock until it is ready
  - Very low latency/processor overhead!
  - Very high system overhead!
    - Causes stress on the network while spinning
    - Processor is not doing anything else useful
- Blocking
  - If the lock can't be acquired, deschedule the process (i.e. unload its state)
  - Higher latency/processor overhead (1000s of cycles?)
    - Takes time to unload/restart the task
    - Notification mechanism needed
  - Low system overhead
    - No stress on the network
    - Processor does something useful
- Hybrid: spin for a while, then block
  - 2-competitive: spin until you have waited the blocking time

Summary
- Important concept: atomic operations
  - An operation that runs to completion or not at all
  - These are the primitives on which to construct various synchronization primitives
- Talked about hardware atomicity primitives:
  - Disabling of interrupts, test&set, swap, compare&swap, load-linked/store-conditional
- Showed several constructions of locks
  - Must be very careful not to waste/tie up machine resources
    - Shouldn't disable interrupts for long
    - Shouldn't spin-wait for long
  - Key idea: separate lock variable, use hardware mechanisms to protect modifications of that variable
- Talked about semaphores, monitors, and condition variables
  - Higher-level constructs that are harder to screw up
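For reference, here is a minimal C/pthreads rendering of the Simple Monitor Example above: a pthread mutex plays the role of the Lock and a pthread condition variable plays dataready. A fixed-size ring buffer stands in for the slide's (infinite) Queue, and overflow handling is omitted; the capacity of 64 is an arbitrary illustrative choice.

    /* Hypothetical monitor-style synchronized queue with pthreads. */
    #include <pthread.h>

    #define QCAP 64

    static int items[QCAP];
    static int head = 0, tail = 0, count = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  dataready = PTHREAD_COND_INITIALIZER;

    void AddToQueue(int item) {
        pthread_mutex_lock(&lock);            /* Get Lock */
        items[tail] = item;                   /* Add item (assumes queue not full) */
        tail = (tail + 1) % QCAP;
        count++;
        pthread_cond_signal(&dataready);      /* Signal any waiters */
        pthread_mutex_unlock(&lock);          /* Release Lock */
    }

    int RemoveFromQueue(void) {
        pthread_mutex_lock(&lock);            /* Get Lock */
        while (count == 0)                    /* If nothing, sleep; wait releases the lock atomically */
            pthread_cond_wait(&dataready, &lock);
        int item = items[head];               /* Get next item */
        head = (head + 1) % QCAP;
        count--;
        pthread_mutex_unlock(&lock);          /* Release Lock */
        return item;
    }

Note how pthread_cond_wait() takes the mutex as an argument, exactly so the sleep and the lock release happen atomically, which is the key idea of the condition-variable slide.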
