Systems Programming / C and UNIX

Systems Programming / C and UNIX
Lecture 12
Alice E. Fischer
November 22, 2013

Outline
1. Jobs and Job Control
2. Shared Memory: Concepts and Definitions; Producer / Consumer Revisited; Setting Up the Shared Memory
3. Semaphores: Signals in the Producer / Consumer

Jobs and Job Control
A Job is a Process
Job Control
Suspension
Foreground and Background
Recovering from a Mess

A Job is a Process
We use the term job to refer to a process that is controlled by a shell. Once started, each job is either in the foreground, running in the background, or suspended.
Starting a job the normal way puts it in the foreground:
    > server2
To start a job in the background, use an ampersand:
    > server2 &
The shell will start the background job, then return immediately to you so that you can start another job that will run concurrently. The output from the background job will still come to the terminal screen. If you start a job in the background, your shell will control it.
To suspend the foreground process, type Control-Z. The suspended job will stop running, but can be restarted. To resume a job after suspension, type jobs to find its job number n, then type fg %n or bg %n.

Job Control: server2 and the Three Clients
To prepare for this experiment, I used a shell to start the server:
    > server2
I opened a second shell and listed the jobs it was controlling:
    > jobs
The list was empty. Then I used the same shell to start three clients at the same time, all running in the background:
    > clientm localhost & clientj localhost & client localhost &
Typing jobs again for this shell, we see:
    > jobs
    [1] + Running    clientj localhost
    [2] - Running    clientm localhost
    [3]   Running    client localhost

Foreground, Background, and Suspension
You can only send keyboard input to a foreground job. You can suspend both foreground and background jobs.
To move job [2] into the foreground, use:
    > fg %2
Control-Z suspends the foreground job and returns control to the shell. After suspending clientm we see:
    > jobs
    [1] + Running    clientj localhost
    [2] - Suspended  clientm localhost
    [3]   Running    client localhost
To resume work on job [2] in the background or in the foreground, use:
    > bg %2    or    > fg %2

Foreground and Background
A + in the list of jobs marks the current default job. If a fg or bg command is given without a job number, it affects the default job. A - in the list of jobs marks the prior-current job; you can manage this job using %-.
Output to stdout from all of the running jobs will come to the shell unless stdout was redirected when the job was started. If the shell is in the foreground and a background job has produced some output, hit return to get a command prompt.
Suppose one of your running background jobs ends with a fatal error and is waiting for you to release it. You must bring it to the foreground, then release it. It will then exit and be gone from the jobs list.

Detached Processes
We use a shell to run and control processes. Usually this is a simple matter, but things can get complex.
Suppose your shell is running a job in the background, and it is not suspended. When you close the shell window, that terminates the shell and its foreground process. But it does not terminate a running background process. That process lives on as a detached process. If it is a server, new clients can still attach to it. This can create a mess!
To see the detached process, use another shell to type ps a.
To terminate it gracefully, use kill pid or kill -TERM pid.
To terminate it immediately, use kill -KILL pid.

Recovering from a Mess
Suppose you have made a mess: several jobs are controlled by your shell, and one of them needs to be stopped because it is out of control and not responding to normal signals.
You could stop all of the controlled processes by killing the shell. However, this may be a bad idea, because a process running in the background becomes detached when you close its shell. It is like a daemon!
If you only want to stop one job, use the kill command. Kill it this way from the same shell window:
    > kill -KILL %2
Or kill it this way from a different shell window:
    > ps                  (note the PID in the left-hand column)
    > kill -KILL 48283    (using that PID)

Shared Memory
Concepts and Definitions
Producer and Consumer Revisited
Setting Up a Shared Memory Area
Using the Shared Memory Area

Threads vs. Processes
Earlier, we looked at a producer/consumer application that was written as one module and implemented with threads. The producers and consumers in that application were able to share memory that was global in the program that spawned the threads.
When we fork off child processes, there is no built-in shared memory. The child process gets a copy of the memory of the parent process, but there is no sharing after that. Similarly, there is no easy sharing mechanism for processes that are started up separately.
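
The independence of the child's memory after fork() can be seen with a tiny demo (illustrative code, not from the lecture): the child changes its copy of a variable, and the parent still sees the old value.

    /* fork_copy.c -- a change made in the child is invisible to the parent. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int counter = 0;
        pid_t pid = fork();
        if (pid < 0) { perror("fork"); exit(1); }
        if (pid == 0) {                 /* child: modify its private copy */
            counter = 42;
            printf("child:  counter = %d\n", counter);
            exit(0);
        }
        wait(NULL);                     /* parent: still sees the old value */
        printf("parent: counter = %d\n", counter);
        return 0;
    }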

Shared Memory for Processes
Unix does provide a way for one process to create a shared memory block to which another process can link. The communicating processes can be created by forking, or they can be separately compiled and separately started up.
As with memory shared among threads, processes that share blocks of memory must use mutex locks and synchronization primitives to avoid disaster.
The example this week is a single producer and a single consumer: two separately compiled processes that communicate through a shared memory area created by the consumer.

Producer / Consumer Revisited: producer.c / consumer.c
The consumer will be the last one to use or need the shared memory. Therefore, it creates and frees the shared memory block and the semaphores that coordinate it.
The shared memory block stores a single variable: a structure that contains a bounded queue, status flags, and the semaphores to regulate access to the queue.
Most communication between the producer and the consumer is through the shared memory. In addition, the producer sends one signal to the consumer.
The consumer must be started first. The program will run until you stop one of the processes with a CTRL-C.

Producer / Consumer Logic
The producer calculates data and stores it in the shared queue if slots are available; if not, it waits for more slots to appear. Concurrently, the consumer removes letters from the queue and prints them, as long as there are any to remove; then it waits for more. Both processes output the letters so that you can see that the queue and the communication both work.
If the consumer receives a SIGINT, it sets a flag that tells the producer to stop, then it continues its work. The producer's SIGINT handler sets the same flag. If the producer sees the flag set (for either reason), it stops producing more output, sets the producerdone flag, and signals the consumer to ensure that the consumer is awake to read the flags. Then it quits.
When the consumer sees the producerdone flag, it cleans out the queue, releases the shared resources, and quits.
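
The flag-setting handlers described above follow the standard pattern of doing nothing in the handler except recording the request. Here is a minimal sketch of that idea (illustrative names; the real programs set the flag inside the shared memory block rather than in a file-scope variable):

    /* sigint_flag.c -- record a SIGINT and let the main loop act on it. */
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t request_stop = 0;

    static void on_sigint(int signo) {
        (void)signo;
        request_stop = 1;               /* just record the request; no I/O here */
    }

    int main(void) {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;
        sigaction(SIGINT, &sa, NULL);

        while (!request_stop) {         /* "produce" until asked to stop */
            sleep(1);
        }
        printf("stop requested, shutting down\n");
        return 0;
    }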

Setting Up the Shared Memory: A Shared Memory Block Has an ID#
This part of the project is easy. Each block of shared memory must be given a unique identifier. A block-id can be any integer; it is quite arbitrary. The same block-id code must be defined by every process that shares the memory block. This is most easily handled by putting the codes in a header file that is included by all parts of the application:
    #define SHMKEY 123    // Unique ID of shared memory.
One process must create the shared memory. Here, we create space for an array of char; 0x1FF is the permission mode (9 permission bits):
    shmid = shmget(SHMKEY, BUFSIZE*sizeof(char), IPC_CREAT | 0x1FF);
Other processes can then link to the existing shared memory:
    shmid = shmget(SHMKEY, BUFSIZE*sizeof(char), 0);
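
For the rest of these notes, it helps to imagine the shared header that both programs include. The sketch below is an assumption pieced together from the fields and constants mentioned in these slides; the lecture's actual header may differ in names and layout:

    /* prodcons.h -- assumed shared header, for illustration only. */
    #ifndef PRODCONS_H
    #define PRODCONS_H

    #include <stdbool.h>
    #include <sys/types.h>

    #define SHMKEY 123            /* unique ID of the shared memory block      */
    #define SEMKEY 222            /* unique ID of the semaphore set            */
    #define QSIZE  8              /* capacity of the bounded queue (assumed)   */

    enum { ITEMS = 0, SLOTS = 1, MUTEX = 2 };   /* indices into the semaphore set */

    typedef struct {
        char  buffer[QSIZE];      /* the bounded queue                         */
        int   head, tail;
        bool  requeststop;        /* a user has asked the programs to stop     */
        bool  producerdone;       /* the producer has finished                 */
        pid_t consumerpid;        /* so the producer can signal the consumer   */
        int   semid, shmid;       /* IPC identifiers, kept for cleanup         */
    } memoryt;

    #endif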

Using the Shared Memory: Producer
Any process that uses the shared memory must attach it to a local variable name of the correct type, in this case memoryt*:
    memoryt* m;
    m = shmat(shmid, NULL, 0);
When both the shared memory and the controlling semaphores have been set up, our producer enters a loop in which it computes items and enqueues them:
    char ch = rand() % 26 + 'A';
    enqueue(m, ch);
enqueue prints debugging information, waits for the two semaphores that control the shared queue, and puts the data into the queue:
    m->buffer[m->tail] = ch;    // Modify the queue.
    m->tail = (m->tail + 1) % QSIZE;
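
Putting those steps together, a minimal producer-side sketch looks roughly like this. It assumes the illustrative prodcons.h above and, for brevity, skips the semaphore waits that the real enqueue performs:

    /* producer_sketch.c -- illustration only; no locking, no semaphores. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include "prodcons.h"

    int main(void) {
        int shmid = shmget(SHMKEY, sizeof(memoryt), 0);   /* attach to the existing block */
        if (shmid < 0) { perror("shmget"); exit(1); }

        memoryt *m = shmat(shmid, NULL, 0);               /* map it into this process     */
        if (m == (void *)-1) { perror("shmat"); exit(1); }

        for (int i = 0; i < 5; i++) {
            char ch = rand() % 26 + 'A';                  /* compute an item              */
            m->buffer[m->tail] = ch;                      /* enqueue it                   */
            m->tail = (m->tail + 1) % QSIZE;
            printf("produced: %c\n", ch);
        }
        shmdt(m);                                         /* detach before exiting        */
        return 0;
    }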

Using the Shared Memory: Consumer
The consumer must also map the shared memory into its own address space; then it must initialize all the shared variables:
    m->consumerpid = getpid();
    m->requeststop = m->producerdone = false;
    m->head = m->tail = 0;
    m->semid = semid;
    m->shmid = shmid;
Similarly, the consumer executes a consumption loop that dequeues data until the producer quits:
    while (!m->producerdone) { printf("consumed: %c\n", dequeue(m)); }
dequeue prints debugging information, waits for the two semaphores that control the shared queue, and takes the data out of the queue:
    char ch = m->buffer[m->head];
    m->head = (m->head + 1) % QSIZE;
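
A matching consumer-side sketch, again assuming the illustrative prodcons.h: it creates and initializes the block, drains whatever is in the queue, and removes the segment. The real consumer loops until producerdone is set and coordinates with semaphores; this sketch shows only the memory handling.

    /* consumer_sketch.c -- illustration only; semaphores and signals omitted. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/ipc.h>
    #include <sys/shm.h>
    #include "prodcons.h"

    int main(void) {
        int shmid = shmget(SHMKEY, sizeof(memoryt), IPC_CREAT | 0x1FF);
        if (shmid < 0) { perror("shmget"); exit(1); }
        memoryt *m = shmat(shmid, NULL, 0);
        if (m == (void *)-1) { perror("shmat"); exit(1); }

        m->consumerpid = getpid();              /* initialize the shared variables   */
        m->requeststop = m->producerdone = false;
        m->head = m->tail = 0;
        m->shmid = shmid;
        m->semid = -1;                          /* would be the semget() id for real */

        while (m->head != m->tail) {            /* drain whatever has been enqueued  */
            char ch = m->buffer[m->head];
            m->head = (m->head + 1) % QSIZE;
            printf("consumed: %c\n", ch);
        }
        shmdt(m);                               /* detach ...                        */
        shmctl(shmid, IPC_RMID, NULL);          /* ... and release the shared memory */
        return 0;
    }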

Semaphores
Synchronization of Processes
Creating a Semaphore
How Counting Semaphores Work

Synchronization of Processes
Whenever two threads or two processes share a memory area, it must be protected by excluding all other processes while one process reads or writes the common area.
Implementing mutual exclusion with threads requires setting up a mutex and a condition variable. These can be set up in the memory that the threads share before the threads are created, and the threads will automatically have access. Four functions from the pthread package are used to manage the locks: pthread_mutex_lock(), pthread_mutex_unlock(), pthread_cond_wait(), and pthread_cond_signal(). Failure to use the locks properly could result in several kinds of failure, from corrupted data to deadlock.
For processes, semaphores are used for synchronization; they include the functionality of both locks and condition variables, but they work at a more primitive level. In addition, some sort of signaling capability must be implemented, through shared memory, signaling system calls, or both.
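
For reference, the thread-side pattern that the four pthread functions implement looks like this (a generic sketch, not the lecture's earlier code): the mutex protects the shared count, and the condition variable lets a consumer sleep until a producer signals that an item exists.

    /* pthread_pattern.c -- mutex + condition variable (compile with -pthread). */
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  nonempty = PTHREAD_COND_INITIALIZER;
    static int items = 0;

    static void *producer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        items++;                               /* modify the shared state          */
        pthread_cond_signal(&nonempty);        /* wake a thread waiting for items  */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *consumer(void *arg) {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (items == 0)                     /* re-check the condition on wakeup */
            pthread_cond_wait(&nonempty, &lock);
        items--;
        pthread_mutex_unlock(&lock);
        printf("got an item\n");
        return NULL;
    }

    int main(void) {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }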

How Counting Semaphores Work
The WAIT and POST operations are executed atomically.
POST increments a semaphore's counter and wakes up a process that is waiting on it.
WAIT tests the counter and blocks the process if it is 0. When some other process increments the counter by executing POST, WAIT decrements the counter and execution proceeds. If the semaphore is greater than 0 when WAIT is called, WAIT immediately decrements the counter and execution proceeds.
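
In conceptual pseudocode (not the actual system calls, and with a busy-wait standing in for the kernel's real blocking), the semantics are:

    /* Conceptual sketch only: assume each operation runs atomically. */
    typedef struct { int count; } semaphore;

    void WAIT(semaphore *s) {
        while (s->count == 0)
            ;                 /* block until some other process POSTs      */
        s->count--;           /* then take one unit and proceed            */
    }

    void POST(semaphore *s) {
        s->count++;           /* give back one unit ...                    */
                              /* ... and wake one process blocked in WAIT  */
    }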

Creating a Semaphore
A Unix semaphore implements a counter that can never go below 0. We use semaphores that count up and down to handle synchronization for the producer / consumer application, and a true/false semaphore to handle mutual exclusion.
Semaphores are implemented in kernel memory, handled by the kernel, and delivered to us much like a shared-memory segment, through their own unique ID code:
    #define SEMKEY 222    // ID of semaphore memory.
In the consumer, we request, or create, 3 semaphores:
    semid = semget(SEMKEY, 3, IPC_CREAT | 0x1FF);
The parallel call in the producer attaches to the 3 existing semaphores:
    semid = semget(SEMKEY, 3, 0);
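
With error checking added, the create-then-attach pair might look like this (a sketch; in the real programs the two calls live in the consumer and the producer respectively):

    /* semget_sketch.c -- create a set of three semaphores, then attach to it. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    #define SEMKEY 222

    int main(void) {
        int semid = semget(SEMKEY, 3, IPC_CREAT | 0x1FF);   /* consumer: create */
        if (semid < 0) { perror("semget(create)"); exit(1); }

        int semid2 = semget(SEMKEY, 3, 0);                  /* producer: attach */
        if (semid2 < 0) { perror("semget(attach)"); exit(1); }

        printf("semaphore set id = %d\n", semid);
        return 0;
    }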

Shared Memory Needs Coordination
To use a semaphore, you must initialize it and define WAIT and POST. We initialize the three semaphores:
    semctl( semid, ITEMS, SETVAL, 0 );
    semctl( semid, SLOTS, SETVAL, QSIZE );
    semctl( semid, MUTEX, SETVAL, 1 );
The function semwait() performs the WAIT semantics; sempost() performs the POST semantics.
semid[ITEMS] counts the number of items in the shared queue. semid[SLOTS] counts the number of available slots in the queue. The sum of the values of ITEMS and SLOTS will equal QSIZE.
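
A compilable sketch of this initialization (the ITEMS/SLOTS/MUTEX indices and QSIZE are assumptions; note that on Linux the program must define union semun itself, as the semctl man page specifies):

    /* sem_init_sketch.c -- initialize the three semaphores. */
    #include <stdio.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    enum { ITEMS = 0, SLOTS = 1, MUTEX = 2 };
    #define QSIZE 8

    union semun { int val; struct semid_ds *buf; unsigned short *array; };

    static void init_semaphores(int semid) {
        union semun arg;
        arg.val = 0;     semctl(semid, ITEMS, SETVAL, arg);   /* queue starts empty    */
        arg.val = QSIZE; semctl(semid, SLOTS, SETVAL, arg);   /* all slots are free    */
        arg.val = 1;     semctl(semid, MUTEX, SETVAL, arg);   /* mutex starts unlocked */
    }

    int main(void) {
        int semid = semget(222, 3, IPC_CREAT | 0x1FF);
        if (semid < 0) { perror("semget"); return 1; }
        init_semaphores(semid);
        return 0;
    }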

Using WAIT and POST
Except for creation and initialization, the producer and consumer use the semaphores in a symmetric manner.
The producer will WAIT on semid[SLOTS] before using the shared memory and POST to semid[ITEMS] when it finishes. The consumer will WAIT on semid[ITEMS] before using the shared memory and POST to semid[SLOTS] when it finishes. In this way, each process will lock and unlock the shared memory, alternately blocking and releasing the other process.
The sempost functions are simpler than the semwait functions because they never block. The semwait for the producer is simpler than the semwait for the consumer: both must check for errors, but if a termination signal comes into the producer, it can just quit. The consumer must empty the queue before quitting.
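
The symmetric protocol, written out as a sketch (the semwait()/sempost() signatures are assumed, and memoryt, QSIZE, and the semaphore indices are those of the illustrative header above):

    #include "prodcons.h"                  /* assumed header sketched earlier */

    void semwait(int semid, int semnum);   /* assumed: WAIT on one semaphore  */
    void sempost(int semid, int semnum);   /* assumed: POST to one semaphore  */

    void produce_one(int semid, memoryt *m, char ch) {
        semwait(semid, SLOTS);             /* claim a free slot               */
        semwait(semid, MUTEX);             /* lock the queue                  */
        m->buffer[m->tail] = ch;
        m->tail = (m->tail + 1) % QSIZE;
        sempost(semid, MUTEX);             /* unlock                          */
        sempost(semid, ITEMS);             /* announce one more item          */
    }

    char consume_one(int semid, memoryt *m) {
        semwait(semid, ITEMS);             /* wait until an item exists       */
        semwait(semid, MUTEX);
        char ch = m->buffer[m->head];
        m->head = (m->head + 1) % QSIZE;
        sempost(semid, MUTEX);
        sempost(semid, SLOTS);             /* announce one more free slot     */
        return ch;
    }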

The sembuf Array
To use a counting semaphore, you must create an array of sembuf objects. Each object in the array represents some action to be performed on a semaphore. All actions are performed atomically. In this program, we use an array with exactly one entry.
Here we set up the sembuf array for WAIT; semnum is the number of the semaphore on which we are waiting:
    static struct sembuf decrement[1];
    decrement[0].sem_num = semnum;
    decrement[0].sem_op = -1;
    decrement[0].sem_flg = 0;
The sem_op of -1 tells the semaphore to subtract from its counter each time semwait is called. The sem_flg is a set of flags that can be used to modify the semaphore's behavior; use 0 for simple protocols.
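
The POST action is built the same way; only sem_op changes sign. A small illustrative helper (not the lecture's code):

    #include <sys/sem.h>

    /* Build the one-entry action for POST on semaphore number semnum. */
    static struct sembuf make_post_action(unsigned short semnum) {
        struct sembuf increment;
        increment.sem_num = semnum;   /* which semaphore in the set   */
        increment.sem_op  = +1;       /* POST: add one to the counter */
        increment.sem_flg = 0;        /* no special flags             */
        return increment;
    }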

Calling semop()
To execute a semaphore operation (lock/unlock, count up or down), use semop(). This is the code from the producer's semwait:
    int status = semop(semid, decrement, 1);
    if (status < 0) {
        if (errno != EINTR) fatalp("unexpected interrupt in semwait.");
    }
The first parameter is the semaphore ID# that was returned from semget. The second parameter is the sembuf array (the actions to perform). The third parameter is the number of actions in the sembuf array.
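
A complete sempost() built from the same pieces might look like this (a sketch: the lecture's fatalp() error helper is replaced here by perror()/exit(), and since POST never blocks there is no EINTR case to forgive):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    void sempost(int semid, unsigned short semnum) {
        struct sembuf increment = { .sem_num = semnum, .sem_op = +1, .sem_flg = 0 };
        if (semop(semid, &increment, 1) < 0) {      /* one atomic +1 action */
            perror("unexpected error in sempost");
            exit(1);
        }
    }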

Signals in the Producer / Consumer: The Shut-Down Sequence
This is how the program terminates. The user can signal either process to initiate the shut-down. This causes the requeststop flag to be set in the shared memory.
The producer checks requeststop before producing an item. If the flag is on, it sets producerdone to true and wakes up the consumer by sending it a SIGINT. Then it quits.
When the consumer sees that producerdone is true, it changes the decrement operation in semwait by setting its flags to IPC_NOWAIT. The semaphore continues to operate and decrement the counter, but it will never again block. The consumer then continues emptying the queue until nothing is left. Now the shared memory and semaphores are released, and the consumer exits.
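
The effect of switching to IPC_NOWAIT can be sketched in one small helper (illustrative names; error handling trimmed): once producerdone is set, semop() returns -1 with errno EAGAIN instead of blocking when the queue is empty, and the consumer can clean up.

    #include <errno.h>
    #include <stdbool.h>
    #include <sys/ipc.h>
    #include <sys/sem.h>

    /* Returns true if an item/slot was claimed, false if nothing is left. */
    bool semwait_draining(int semid, unsigned short semnum, bool producerdone) {
        struct sembuf decrement = { .sem_num = semnum, .sem_op = -1,
                                    .sem_flg = producerdone ? IPC_NOWAIT : 0 };
        if (semop(semid, &decrement, 1) == 0)
            return true;              /* got one item/slot                            */
        return false;                 /* EAGAIN: queue is empty; other errors ignored */
    }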

Summary
This machinery is deep and sophisticated, but simple enough to use. It works!
Create or attach to the shared memory.
Map the handle for the shared memory into your own variable.
Create and initialize the shared semaphores.
Define functions with arrays of sembuf objects to implement the semaphore-protocol operations.
Call semwait() before using a shared memory resource.
Call sempost() after using a shared memory resource.
Be sure to use the proper semaphore subscript for WAIT and POST.
BE SURE TO CHECK the error return codes from all system calls.