CS 3723 Operating Systems: Final Review

Instructor: Dr. Tongping Liu

Lecture Outline
High-level synchronization structure: Monitor
Pthread mutex
Condition variables
Barrier
Threading issues

Monitors
A high-level synchronization construct (implemented in different languages) that provides mutual exclusion within the monitor AND the ability to wait for a certain condition to become true (see the C sketch below).

    monitor monitor-name {
        shared variable declarations
        procedure body P1 (...) { ... }
        procedure body P2 (...) { ... }
        procedure body Pn (...) { ... }
        { initialization code; }
    }

Monitors vs. Semaphores
A monitor:
Ø An object designed to be accessed across threads
Ø Member functions enforce mutual exclusion
A semaphore:
Ø A low-level object
Ø We can use semaphores to implement a monitor

Semaphore vs. Mutex Lock
Definition, initialization, and entering/exiting the critical section:

    /* Binary semaphore version */
    volatile int cnt = 0;
    sem_t mutex;
    sem_init(&mutex, 0, 1);             /* initialized to 1 */

    for (i = 0; i < niters; i++) {
        sem_wait(&mutex);               /* wait() */
        cnt++;
        sem_post(&mutex);               /* signal() */
    }

    /* Pthread mutex version */
    volatile int cnt = 0;
    pthread_mutex_t mutex;
    pthread_mutex_init(&mutex, NULL);   /* initialized to unlocked */

    for (i = 0; i < niters; i++) {
        pthread_mutex_lock(&mutex);
        cnt++;
        pthread_mutex_unlock(&mutex);
    }

Binary Semaphore and Mutex Lock?
Binary semaphore:
Ø No ownership
Mutex lock:
Ø Only the owner of the lock can release it.
Ø Priority-inversion safety: may temporarily promote a task's priority.
Ø Deletion safety: a task owning a lock can't be deleted.
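The monitor pseudocode above is language-neutral; as the "Monitors vs. Semaphores" slide notes, the same idea can be built from lower-level primitives. Below is a minimal sketch in C of a monitor-like object using a Pthread mutex and condition variable: every "member function" takes the monitor lock, and account_withdraw() waits for the condition "enough funds" to become true. The account_t type and function names are illustrative, not from the lecture.

    #include <pthread.h>

    /* Monitor-like object: the lock enforces mutual exclusion inside every
     * "member function", and the condition variable lets a caller wait for
     * a condition (sufficient balance) to become true. */
    typedef struct {
        pthread_mutex_t lock;
        pthread_cond_t  funds_added;
        long            balance;
    } account_t;

    void account_init(account_t *a) {
        pthread_mutex_init(&a->lock, NULL);
        pthread_cond_init(&a->funds_added, NULL);
        a->balance = 0;
    }

    void account_deposit(account_t *a, long amount) {
        pthread_mutex_lock(&a->lock);              /* enter the monitor */
        a->balance += amount;
        pthread_cond_broadcast(&a->funds_added);   /* wake waiting withdrawers */
        pthread_mutex_unlock(&a->lock);            /* leave the monitor */
    }

    void account_withdraw(account_t *a, long amount) {
        pthread_mutex_lock(&a->lock);              /* enter the monitor */
        while (a->balance < amount)                /* wait until the condition holds */
            pthread_cond_wait(&a->funds_added, &a->lock);
        a->balance -= amount;
        pthread_mutex_unlock(&a->lock);            /* leave the monitor */
    }

Callers never touch balance directly; all access goes through these functions, which is the discipline a language-level monitor enforces automatically.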

Synchronization Operations
Safety
Ø Locks provide mutual exclusion
But we need more than just mutual exclusion of critical regions:
Coordination
Ø Condition variables provide ordering

Condition Variables
A special pthread data structure that makes it possible/easy to go to sleep. Atomically:
ü release the lock
ü put the thread on a wait queue
ü go to sleep
Each CV has a queue of waiting threads.
Do we worry about threads that have been put on the wait queue but have NOT gone to sleep yet?
Ø No, because those actions are atomic.
Each condition variable is associated with one lock.

Condition variable operations:
Ø wait(lock& l, CV& c)
ü Atomically releases the lock and goes to sleep; you must be holding the lock!
ü May reacquire the lock when awakened (pthreads do)
Ø signal(CV& c)
ü Wakes up one waiting thread, if any
Ø broadcast(CV& c)
ü Wakes up all waiting threads

Condition Variable vs. Semaphore
Bounded-buffer example: the consumer waits while the queue is empty; the producer inserts an item into the queue and wakes a waiter.

    /* Condition-variable version */
    void *consumer_dequeue(void) {
        void *item = NULL;
        pthread_mutex_lock(&l);
        while (q.empty())
            pthread_cond_wait(&nempty, &l);
        item = q.pop_back();
        pthread_cond_signal(&nfull);
        pthread_mutex_unlock(&l);
        return item;
    }

    void producer_enqueue(void *item) {
        pthread_mutex_lock(&l);
        while (q.full())
            pthread_cond_wait(&nfull, &l);
        q.push_front(item);
        pthread_cond_signal(&nempty);
        pthread_mutex_unlock(&l);
    }

    /* Semaphore version */
    void *consumer_dequeue(void) {
        void *item = NULL;
        wait(full);
        wait(mutex);
        item = q.pop_back();
        signal(mutex);
        signal(empty);
        return item;
    }

    void producer_enqueue(void *item) {
        wait(empty);
        wait(mutex);
        q.push_front(item);
        signal(mutex);
        signal(full);
    }

Barrier
[Figure: three threads each work through Epoch 0, Epoch 1, and Epoch 2 over time; no thread enters the next epoch until all have reached the barrier. A barrier sketch follows below.]

Lock Problems
Race condition
Atomicity violation
Order violation
Deadlock
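The Barrier figure above can be reproduced almost directly with POSIX barriers. A minimal sketch, assuming three worker threads and three epochs; NTHREADS, NEPOCHS, and worker() are illustrative names, not from the slides.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 3
    #define NEPOCHS  3

    static pthread_barrier_t barrier;

    /* No thread may start epoch k+1 until all NTHREADS threads finish epoch k. */
    static void *worker(void *arg) {
        long id = (long)arg;
        for (int epoch = 0; epoch < NEPOCHS; epoch++) {
            printf("thread %ld working in epoch %d\n", id, epoch);
            pthread_barrier_wait(&barrier);   /* block until everyone arrives */
        }
        return NULL;
    }

    int main(void) {
        pthread_t tid[NTHREADS];
        pthread_barrier_init(&barrier, NULL, NTHREADS);
        for (long i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, (void *)i);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        pthread_barrier_destroy(&barrier);
        return 0;
    }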

Issues Related to Condition Variables
signal() before wait()
Ø The waiting thread may miss the signal
Failing to lock the mutex before wait()
Ø wait() may return an error, or may not block
Using if (!condition) wait(); instead of while (!condition) wait();
Ø The condition may still be false when the thread wakes up
Ø May lead to arbitrary errors, such as a segmentation fault
Forgetting to unlock the mutex after signal/wakeup
(A sketch of the safe wait/signal pattern appears below.)
https://courses.engr.illinois.edu/cs241/sp2014/lecture/22-condition-deadlock.pdf

Outline
Client-server model
Ø Widely used in many applications: ftp, mail, http, ssh
Communication models
Ø Connectionless vs. connection-oriented
Network basics: TCP/IP protocols
Connection-oriented client-server communication
Ø Sockets & usage: socket, bind, listen, accept, connect
UICI: A Simple Client/Server Implementation
Ø Simplified interface: u_open, u_accept, u_connect

Client-Server Model
Applications with two distinct parts: server vs. client
Server: waits for requests from clients
Client: makes requests for services from the server
Examples:
Ø Web server vs. clients: an HTTP server and Firefox, Chrome, etc.
Ø Mail server vs. clients
Ø File server vs. clients: NFS

Connection-oriented Client-Server Model
The client sets up a connection to the server's well-known port number
Ø After that, the two communicate over the private channel

Many Clients vs. One Server
Clients initially use the same passive endpoint
How does the server handle each request? Sequentially?
Servers are typically powerful enough to handle multiple requests

Pros and cons of the connection-oriented model?
Pros: errors can be detected (e.g., no response, or the server is overloaded)
Cons: initial setup overhead
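For the condition-variable pitfalls listed at the start of this section, the standard remedy is to update and test the shared predicate only while holding the mutex, and to re-test it in a while loop after waking up. A minimal sketch, assuming a simple ready flag (the flag and function names are illustrative):

    #include <pthread.h>

    static pthread_mutex_t m  = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  cv = PTHREAD_COND_INITIALIZER;
    static int ready = 0;

    void wait_until_ready(void) {
        pthread_mutex_lock(&m);            /* always lock before waiting */
        while (!ready)                     /* while, not if: re-check after wakeup */
            pthread_cond_wait(&cv, &m);    /* atomically releases m and sleeps */
        /* ... safely use the protected state here ... */
        pthread_mutex_unlock(&m);
    }

    void make_ready(void) {
        pthread_mutex_lock(&m);
        ready = 1;                         /* change the predicate before signaling */
        pthread_cond_signal(&cv);
        pthread_mutex_unlock(&m);          /* remember to unlock after signaling */
    }

Because the waiter checks ready under the lock, a signal that arrives "before the wait" is not lost: the waiter simply sees ready == 1 and never sleeps.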

Handle Concurrent Requests
The server can fork a child process to handle each request from a different client
Ø Too costly; use threads!
Ø Apache, nginx: hybrid multi-process, multi-threaded. Why?

Thread Pools
[Figure: tasks enter a task queue and are executed by a thread pool according to an execution policy; a task is defined as the Runnable of each Executor object, and the policy is defined in the executor class's execute().]
Source: people.sutd.edu.sg/~sunjun/50003/wk10slides.pptx

Advantages of Thread Pools
Reusing an existing thread reduces thread creation and teardown costs.
No latency associated with thread creation; this improves responsiveness.
By properly tuning the size of the thread pool, you can have enough threads to keep the processors busy while not having so many that your application runs out of memory or thrashes due to competition among threads for resources.

Sizing Thread Pools
The ideal size for a thread pool depends on the types of tasks and the deployment system
Ø If it is too big, performance suffers
Ø If it is too small, throughput suffers
Heuristics (see the pool sketch below):
Ø For compute-intensive tasks, N+1 threads for an N-processor system
Ø For tasks that include I/O or other blocking operations, you want a larger pool

IP Solves the Routing Problem
Decide the route for each packet
Ø Necessary in MANs and WANs
Update knowledge of the network
Ø Adaptive/dynamic routing is usually used: traffic patterns, topological changes
Routing decision
Ø Hop-by-hop, with periodic update and distribution of traffic data, e.g., the distance-vector, dynamic, distributed algorithm

Programmer's View of TCP/IP
[Figure: applications sit on top of the transport layer and use either TCP or UDP, both of which run over IP.]
Transport layer: TCP vs. UDP
TCP (Transmission Control Protocol): connection-oriented stream service
UDP (User Datagram Protocol): connectionless datagram service
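To make the thread-pool slides above concrete, here is a minimal sketch of a fixed-size pool in C: workers pull tasks from a shared queue protected by a mutex and condition variable, and the pool is sized with the N+1 heuristic from the "Sizing Thread Pools" slide. The task_t type, the pool_* names, and the use of sysconf(_SC_NPROCESSORS_ONLN) to count processors are illustrative assumptions, not the Executor API from the slide; error handling is omitted for brevity.

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    typedef struct task {                 /* one queued unit of work */
        void (*fn)(void *);
        void *arg;
        struct task *next;
    } task_t;

    static pthread_mutex_t qlock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  qcond = PTHREAD_COND_INITIALIZER;
    static task_t *head = NULL, *tail = NULL;   /* FIFO queue of pending tasks */
    static int shutting_down = 0;

    void pool_submit(void (*fn)(void *), void *arg) {
        task_t *t = malloc(sizeof *t);
        t->fn = fn; t->arg = arg; t->next = NULL;
        pthread_mutex_lock(&qlock);
        if (tail) tail->next = t; else head = t;
        tail = t;
        pthread_cond_signal(&qcond);      /* wake one idle worker */
        pthread_mutex_unlock(&qlock);
    }

    static void *worker(void *unused) {
        (void)unused;
        for (;;) {
            pthread_mutex_lock(&qlock);
            while (head == NULL && !shutting_down)
                pthread_cond_wait(&qcond, &qlock);
            if (head == NULL) {           /* shutting down and queue drained */
                pthread_mutex_unlock(&qlock);
                return NULL;
            }
            task_t *t = head;
            head = t->next;
            if (head == NULL) tail = NULL;
            pthread_mutex_unlock(&qlock);
            t->fn(t->arg);                /* run the task outside the lock */
            free(t);
        }
    }

    int pool_size_hint(void) {            /* N+1 heuristic for compute-bound work */
        long n = sysconf(_SC_NPROCESSORS_ONLN);
        return (n > 0 ? (int)n : 1) + 1;
    }

    static void print_task(void *arg) { printf("task %ld\n", (long)arg); }

    int main(void) {
        int n = pool_size_hint();
        pthread_t *tids = malloc(n * sizeof *tids);
        for (int i = 0; i < n; i++)
            pthread_create(&tids[i], NULL, worker, NULL);
        for (long i = 0; i < 8; i++)
            pool_submit(print_task, (void *)i);
        pthread_mutex_lock(&qlock);       /* shutdown: workers drain the queue first */
        shutting_down = 1;
        pthread_cond_broadcast(&qcond);
        pthread_mutex_unlock(&qlock);
        for (int i = 0; i < n; i++)
            pthread_join(tids[i], NULL);
        free(tids);
        return 0;
    }

Reusing these workers avoids per-request thread creation, which is exactly the cost the "Advantages of Thread Pools" slide highlights; a pool serving blocking I/O would want more than N+1 threads.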

TCP: Transmission Control Protocol
TCP is connection-oriented.
Ø A 3-way handshake is used for connection setup
Ø Each message is acknowledged (acknowledgments can be piggybacked)
[Figure: connection setup. The active client sends SYN, the passive server replies SYN+ACK, and the client sends ACK; data packets (Data:N, Data:N+1) are then acknowledged.]

Socket Abstraction
A socket must be bound to a local port
Sockets provide endpoints for communication between processes
A socket pair (local IP address, local port, foreign IP address, foreign port) uniquely identifies a communication channel

TCP Socket Primitives
socket: create a new communication endpoint
bind: attach a local address to a socket
listen: announce willingness to accept connections
accept: block the caller until a connection request arrives
connect: actively attempt to establish a connection
send: send some data over the connection
recv: receive some data over the connection
close: release the connection

Client-Server Using TCP Sockets
The server side performs the following actions first:
Ø Step 1: Create a socket
Ø Step 2: Bind the server address & port (# known to clients)
Ø Step 3: Start listening for connection requests
Ø Step 4: Accept a connection request, which creates another socket for private communication with the client
The client creates a socket with (IP/host name, port #) and then tries to connect to the server. (See the sketch below.)

Programming with UDP/IP Sockets
1. Create the socket
2. Identify the socket (name it)
3. On the server, wait for a message
4. On the client, send a message
5. Send a response back to the client (optional)
6. Close the socket
No need to set up a channel.

Lecture 1: Introduction
Operating System: what is it?
Evolution of computer systems and OS concepts
Different types/variations of systems/OS
Ø Parallel/distributed/real-time/embedded OS, etc.
OS as a resource manager
Ø How does the OS provide service? Interrupts/system calls
OS structures and basic components
Ø Process/memory/I/O device managers
OS definition, components, and system calls
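The server steps in "Client-Server Using TCP Sockets" map one-to-one onto the primitives in the table above. Here is a minimal sketch of an echo-style server and the matching client; the port number 8080, the buffer sizes, and the function names are arbitrary choices for illustration, and error checking is abbreviated.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define PORT 8080                     /* "well-known" port for this sketch */

    void run_server(void) {
        int listenfd = socket(AF_INET, SOCK_STREAM, 0);          /* Step 1: create  */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(PORT);
        bind(listenfd, (struct sockaddr *)&addr, sizeof addr);   /* Step 2: bind    */
        listen(listenfd, 8);                                     /* Step 3: listen  */
        for (;;) {
            int connfd = accept(listenfd, NULL, NULL);           /* Step 4: accept, */
            char buf[1024];                                      /* private socket  */
            ssize_t n;
            while ((n = read(connfd, buf, sizeof buf)) > 0)      /* echo to client  */
                write(connfd, buf, n);
            close(connfd);
        }
    }

    void run_client(const char *server_ip) {
        int fd = socket(AF_INET, SOCK_STREAM, 0);                /* create socket   */
        struct sockaddr_in addr;
        memset(&addr, 0, sizeof addr);
        addr.sin_family = AF_INET;
        addr.sin_port   = htons(PORT);
        inet_pton(AF_INET, server_ip, &addr.sin_addr);
        connect(fd, (struct sockaddr *)&addr, sizeof addr);      /* connect         */
        write(fd, "hello\n", 6);
        char buf[64];
        read(fd, buf, sizeof buf);                               /* echoed reply    */
        close(fd);
    }

One process would call run_server() and another run_client("127.0.0.1"); the UICI calls u_open, u_accept, and u_connect from the outline wrap these same steps behind a simpler interface.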

Lecture 2: Programs and Processes
Programs, processes, and threads
Process creation and its components
States of a process and transitions
PCB: Process Control Block
Process (program image) in memory
Argument arrays
Storage and linkage classes
Creation and exit of processes

Lecture 3-4: Processes and Scheduling
Review of processes
Process queues and scheduling events
Different levels of schedulers
Preemptive vs. non-preemptive
Context switches and the dispatcher
Performance criteria
Ø Fairness, efficiency, waiting time, response time, throughput, and turnaround time
Classical schedulers: FIFO, SJF, PSJF, and RR
CPU Gantt chart vs. process Gantt charts

Lecture 5: File System
Basics of file systems
Directory and Unix file system: inodes
UNIX I/O system calls: open, close, read, write, ioctl
File representations: FDT, SFT, inode table
fork and inheritance, filters and redirection
File pointers and buffering
Directory operations
Links of files: hard vs. symbolic

Lecture 6: IPC
Inter-process communication (IPC)
Pipes and their operations (see the sketch below)
FIFOs: named pipes
Ø Allow unrelated processes to communicate
Ring of communicating processes
Ø Steps for ring creation with pipes
Ø A ring of n processes
Other issues in the token ring
Ø Which process should write/read: token management

Lecture 7: Memory Management
Simple memory management: swapping, etc.
Virtual memory and paging
Ø Page tables and address translation
Ø Translation lookaside buffer (TLB)
Multi-level page tables
Page replacement algorithms and modeling
Working set of processes

Lecture 8: Memory Management
Virtual memory
Demand paging
Physical memory management (buddy and slab)
Page replacement
Memory mapping (mmap)
Connecting with user-space memory
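Since the Lecture 6 summary lists pipes as the core IPC mechanism, here is a minimal sketch of the basic pattern, the same building block the ring-of-processes exercise chains together: a parent creates a pipe, forks, writes a token, and the child reads it. The message text is illustrative.

    #include <stdio.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                       /* fd[0]: read end, fd[1]: write end */
        char buf[64];

        if (pipe(fd) == -1)
            return 1;

        pid_t pid = fork();
        if (pid == 0) {                  /* child: reader */
            close(fd[1]);                /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof buf - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child read: %s\n", buf);
            }
            close(fd[0]);
            return 0;
        }
        /* parent: writer */
        close(fd[0]);                    /* close the unused read end */
        write(fd[1], "token", strlen("token"));
        close(fd[1]);                    /* gives the reader EOF */
        wait(NULL);                      /* reap the child */
        return 0;
    }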

Lecture 11: Threads
Motivation and thread basics
Ø Resource requirements: thread vs. process
Thread implementations
Ø User threads: e.g., Pthreads and Java threads
Ø Kernel threads: e.g., Linux tasks
Ø Mapping user- and kernel-level threads
Ø Lightweight processes and scheduler activations
Other issues with threads: process creation, signals, etc.
Threaded programs
Ø Thread pools
Ø Performance vs. number of threads vs. CPUs and I/O

Lecture 12: Synchronization
Memory model of multithreaded programs
Synchronization for coordinating processes/threads
The producer/consumer and bounded-buffer problem
Critical sections and requirements for solutions
Peterson's solution for two processes/threads
Synchronization hardware: atomic instructions
Semaphores
Ø Two operations (wait and signal) and standard usage
Bounded-buffer problem with semaphores (see the sketch below)
Ø Initial values and order of operations

Lecture 13: Thread Synchronization
High-level synchronization structure: Monitor
Pthread mutex
Condition variables
Barrier
Threading issues

Lecture 14: Network Communication
Client-server model
Ø Widely used in many applications: ftp, mail, http, ssh
Communication models
Ø Connectionless vs. connection-oriented
Network basics: TCP/IP protocols
Connection-oriented client-server communication
Ø Sockets & usage: socket, bind, listen, accept, connect
UICI: A Simple Client/Server Implementation
Ø Simplified interface: u_open, u_accept, u_connect

Overall
Process: concept, memory model, scheduling, basic programming (fork(), wait())
File system: files, file pointers, links, FDT, SFT, inodes
IPC: pipe, FIFO, dup2
Memory management: paging, page tables, TLB, buddy allocator, page replacement
Threads: concept, memory model, differences from processes, threading models, basic programming
Synchronization: hardware instructions, semaphores
Thread-based synchronization: locks, condition variables, barriers, synchronization issues
Network communication: connectionless and connection-oriented
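For the Lecture 12 item on the bounded-buffer problem with semaphores, the key points are the initial values (empty = N, full = 0, mutex = 1) and the order of the wait operations. A minimal sketch with POSIX semaphores; the buffer of N integer slots and the function names are illustrative.

    #include <semaphore.h>

    #define N 8                           /* buffer capacity */

    static int buf[N];
    static int in = 0, out = 0;

    static sem_t empty_slots;             /* counts free slots,   initial value N */
    static sem_t full_slots;              /* counts filled slots, initial value 0 */
    static sem_t mutex;                   /* protects the buffer, initial value 1 */

    void buffer_init(void) {
        sem_init(&empty_slots, 0, N);
        sem_init(&full_slots, 0, 0);
        sem_init(&mutex, 0, 1);
    }

    void produce(int item) {
        sem_wait(&empty_slots);           /* first wait for a free slot...      */
        sem_wait(&mutex);                 /* ...then take the lock              */
        buf[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);            /* announce a filled slot */
    }

    int consume(void) {
        sem_wait(&full_slots);            /* wait for a filled slot first */
        sem_wait(&mutex);
        int item = buf[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);           /* announce a free slot */
        return item;
    }

Swapping the two sem_wait calls in either function (taking mutex before waiting on a slot) is the classic deadlock behind the "order of operations" bullet above.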