Operating System Review Part CMSC 602 Operating Systems Ju Wang, 2003 Fall Virginia Commonwealth University
Review Outline
Definition of operating systems
Memory Management: objectives, paging scheme, virtual memory system and replacement algorithms
CPU Scheduling: context switching; FCFS, SJF, and other schemes; scheduling in Linux
Process Synchronization: critical sections, mutual exclusion, deadlock
Definition of Operating Systems
Operating system as an extended machine:
It hides low-level hardware details from the user, and provides an abstract view to the programmer and end-user.
Hiding: CPU registers, physical memory, disk blocks.
Providing: processes, virtual address spaces, file systems.
As a resource manager:
It provides efficient sharing of hardware resources among various tasks.
From the application developer's point of view, the O.S. is a set of services (system calls) and a collection of utility tools.
Programming languages: assembly, C/C++, FORTRAN... shell tools, system configuration tools...
Major Functions of Operating System
Layer Structure
Definition of Operating Systems
Goals as resource manager:
Efficient utilization of resources. How can we maintain 100% usage of CPU time? How to improve the throughput of a disk array? (disk scheduling)
Short response or turn-around time. Can the system support 10,000 HTTP users?
Protection among users and the system.
Accounting of usage.
QoS-guaranteed scheduling of user tasks: real-time systems, time-critical missions, media servers, large-scale video-on-demand services.
Interrupt
Interrupts are an essential means for the O.S. to perform its duty.
Hardware interrupts allow asynchronous operation between the CPU and peripheral devices.
Process scheduling relies on the interrupt from the system clock.
Interrupts must be handled:
efficiently, to minimize system overhead; e.g., the clock interrupt routine is very compact, often written in assembly code;
correctly, to avoid halting the system, and to return to the right interruption point.
Allow re-entry or not? This is the origin of mutual exclusion and many other problems.
DMA
Memory Management
Objectives:
Support large programs
High utilization of physical memory
Support many programs concurrently
Secure sharing and protection
Provide fast access using inexpensive memory (memory hierarchy)
Memory Management: Virtual Memory
Virtual memory: physical memory is divided into pages of the same size (usually 4 KB), assigned to processes on a need-to-use basis.
Allows programs to be executed without being completely in memory.
Frees programmers from concern about memory storage limitations.
Virtual address: or logical address; a linear address space used inside the program.
Physical address: the address put on the system bus to access the actual memory.
Address translation: each virtual memory access actually takes 2 bus cycles, so hardware support is needed: the Translation Look-aside Buffer (similar to a cache).
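The page-based translation above can be sketched as follows. This is a minimal model, assuming 4 KB pages and a single-level page table; the page-table contents are made up for illustration:

```python
PAGE_SIZE = 4096  # 4 KB pages, as above

# Hypothetical single-level page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 9, 2: 3}

def translate(vaddr):
    """Split a virtual address into (page number, offset), then look up the frame."""
    vpn, offset = divmod(vaddr, PAGE_SIZE)
    if vpn not in page_table:
        raise RuntimeError("page fault: page %d is not resident" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))  # page 1, offset 4 -> frame 9 -> 9*4096 + 4 = 36868
```

The table lookup models the extra bus cycle: real hardware avoids it on most accesses by caching recent (page, frame) pairs in the TLB.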
Memory Management: Paging Scheme
Memory Management: Page Allocation
Memory Management: Page Replacement
Memory Management: Replacement Algorithms
Goal: keep the working set in memory.
Random.
First-in, First-out (FIFO): does not necessarily perform well; may suffer from Belady's anomaly.
Least Recently Used (LRU): always pages out the page that has not been used for the longest period of time; an approximation of the optimal algorithm.
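LRU as described above can be sketched with a simple simulation; the reference string below is a hypothetical example, not from any particular workload:

```python
from collections import OrderedDict

def lru_faults(refs, frames):
    """Simulate LRU replacement; return the number of page faults."""
    resident = OrderedDict()  # pages ordered from least to most recently used
    faults = 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # hit: mark as most recently used
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict the least recently used page
            resident[page] = True
    return faults

print(lru_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))  # 10 faults
```

Note that a real kernel cannot afford to reorder a list on every memory access; LRU is approximated with hardware reference bits (e.g., the clock algorithm).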
Belady's Anomaly
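Belady's anomaly (more frames can mean more faults under FIFO) can be demonstrated directly with the classic reference string:

```python
from collections import deque

def fifo_faults(refs, frames):
    """Simulate FIFO replacement; return the number of page faults."""
    resident = deque()
    faults = 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.popleft()  # evict the oldest page
            resident.append(page)
    return faults

# The classic reference string: adding a frame INCREASES the fault count
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(refs, 3))  # 9 faults
print(fifo_faults(refs, 4))  # 10 faults -- Belady's anomaly
```

LRU does not exhibit this anomaly: it is a stack algorithm, so the pages resident with k frames are always a subset of those resident with k+1 frames.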
Memory Management: More Design Issues
Minimize the number of page faults, because a page fault is expensive (may cost 20k or more CPU cycles).
How many pages should be allocated to a process? Equal vs. proportional allocation; global vs. local replacement.
Thrashing: an excessive amount of page faults.
Working-set strategy: use timestamps to trace the usage of pages.
Page size: how large should it be? Trade-off: a small size reduces the waste of internal fragmentation, but increases the size of the page table and the number of page faults.
CPU SCHEDULING
A scheduler evaluates the set of processes in the ready list, selects one of them, and assigns it to a processor for execution.
When CPU scheduling occurs, we also refer to it as context switching. The scheduler will:
Save the run-time context of the current process
Load the run-time context of the process selected to run
Jump to the last interrupted point of the loaded process
CPU SCHEDULING: Two processes, single CPU
CPU SCHEDULING
Maximize CPU utilization and allow CPU sharing (multiprogramming).
The majority of CPU bursts last around 10 msec; I/O usually takes seconds or minutes to finish.
Performance measurements for scheduling algorithms:
CPU utilization ratio: prefer a high ratio
Turnaround time: the shorter, the better
Average waiting time? Response time?
Time sharing in the Linux scheduler: a fixed time slice of 200 ms to run.
SCHEDULER CLASSIFICATION Preemptive or non-preemptive scheduling criteria Uni-processor or multi-processor scheduling Real-time scheduling
CPU SCHEDULING ALGORITHMS: FIRST-COME, FIRST-SERVED
It is non-preemptive.
Starvation-free, but poor performance in terms of average waiting time; average queueing time may be long.
What are the average queueing and residence times for this scenario? How do average queueing and residence times depend on the ordering of these processes in the queue?
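The ordering question above can be explored numerically. A minimal sketch, using hypothetical burst times and assuming all jobs arrive at time 0:

```python
def fcfs_waits(bursts):
    """Waiting time of each job when served in arrival order (all arrive at t=0)."""
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # this job waits for everything queued before it
        elapsed += burst
    return waits

# Hypothetical CPU bursts (ms), in arrival order: one long job ahead of two short ones
bursts = [24, 3, 3]
waits = fcfs_waits(bursts)
print(sum(waits) / len(waits))                  # average wait: (0+24+27)/3 = 17.0
print(sum(fcfs_waits(sorted(bursts))) / 3)      # shortest-first order: (0+3+6)/3 = 3.0
```

This is the convoy effect: a single long burst at the head of the queue inflates everyone else's waiting time, which motivates Shortest Job First below.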
CPU SCHEDULING ALGORITHMS: SHORTEST JOB FIRST
Optimal for minimizing average waiting time. Why? Can you prove it?
Might result in starvation under certain situations.
Two schemes:
Non-preemptive: once a job is assigned to the CPU, it will not be preempted until it finishes.
Preemptive (shortest-remaining-time-first): if a new process arrives with a shorter expected CPU burst than the remaining time of the current process, preempt.
Due to the uncertainty of job execution time, the length of the next CPU burst of a process is predicted based on previous history.
CPU SCHEDULING ALGORITHMS: SHORTEST JOB FIRST
Predicting the time the process will use on its next schedule:
t(n+1) = w * t(n) + (1 - w) * T(n), where
t(n+1): predicted time of the next burst
t(n): measured time of the current burst
T(n): exponential average of all previous bursts
w: a weighting factor (0 <= w <= 1) emphasizing current or previous bursts
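The exponential average above can be computed incrementally, carrying the prediction forward as the running average T(n). The burst history and initial guess below are hypothetical:

```python
def predict_next(tau_prev, t_measured, w=0.5):
    """Exponential average: prediction = w * t(n) + (1 - w) * T(n)."""
    return w * t_measured + (1 - w) * tau_prev

# Hypothetical measured bursts (ms), starting from an initial guess of 10 ms
tau = 10.0
for t in [6, 4, 6, 4]:
    tau = predict_next(tau, t)
print(tau)  # 5.0 -- the prediction has converged toward the recent bursts
```

With w = 0.5, each older burst's influence halves at every step, so stale history fades quickly while the estimate still smooths out one-off spikes.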
CPU SCHEDULING Algorithms: PRIORITY BASED SCHEDULING Assign each process a priority. Schedule highest priority first. All processes within same priority are FCFS. Priority may be determined by user or by some default mechanism. The system may determine the priority based on memory requirements, time limits, or other resource usage. Starvation occurs if a low priority process never runs. Solution: build aging into a variable priority. Delicate balance between giving favorable response for interactive jobs, but not starving batch jobs.
CPU SCHEDULING Algorithms: PREEMPTIVE ALGORITHMS
The currently executing process might be forced to relinquish the CPU when a higher-priority process is ready.
Can be applied to both Shortest Job First and Priority scheduling.
On time-sharing machines, this type of scheme is required because the CPU must be protected from a run-away low-priority process.
Give short jobs a higher priority; the perceived response time is thus better.
What are the average queueing and residence times? Compare with FCFS.
CPU SCHEDULING Algorithms: ROUND ROBIN
Processor sharing: use a small quantum (10-100 ms) such that each process runs frequently.
Use a timer to cause an interrupt after a predetermined time; preempt if the task exceeds its quantum. The preempted process is usually put at the end of the ready queue.
If there are n processes in the ready queue and the time quantum is q, each process gets an equal share (1/n) of the CPU time, and no process waits more than (n-1)*q time units between its runs.
Performance: q large -> behaves like FIFO; q small -> could result in too much context switching, thus low performance.
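The quantum-and-requeue behavior above can be sketched as a simulation. The burst times are hypothetical, and context-switch cost is assumed to be zero:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Return the completion time of each job (all arrive at t=0, zero switch cost)."""
    remaining = list(bursts)
    done = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    clock = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        clock += run
        remaining[i] -= run
        if remaining[i] > 0:
            ready.append(i)   # preempted: back to the end of the ready queue
        else:
            done[i] = clock   # finished within this quantum
    return done

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```

Note how the short jobs finish at t=7 and t=10 instead of waiting behind the 24 ms job as they would under FCFS; the long job pays for this with a later completion time.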
CPU SCHEDULING Algorithms: MULTI-LEVEL QUEUES
Each queue has its own scheduling algorithm; some other algorithm (perhaps priority-based) arbitrates between queues.
Can use feedback to move processes between queues.
The method is complex but flexible; for example, it could separate system processes, interactive, batch, favored, and unfavored processes.
CPU SCHEDULING EXAMPLE: BSD Unix Scheduling
This scheduling policy was implemented in 4.3 BSD:
The quantum is set to 1 second.
Priority is computed with respect to process type and execution history. The equations governing this behavior are:
CPUj(i) = CPUj(i-1)/2
Pj(i) = BASEj + CPUj(i-1)/2 + NICEj, where NICEj is a user-supplied value.
Each second, the priorities are recomputed by the scheduler and a new scheduling decision is made.
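The decay behavior of these equations can be traced numerically. A sketch applying the slide's equations literally, with hypothetical values for the base priority, nice value, and accumulated CPU usage (this omits the usage ticks a real kernel would add each second):

```python
def recompute(base, nice, cpu, seconds):
    """Apply the slide's recomputation once per second.
    Returns the list of (cpu, priority) after each step; lower priority number = runs sooner."""
    history = []
    for _ in range(seconds):
        cpu = cpu / 2                # CPUj(i) = CPUj(i-1)/2
        priority = base + cpu + nice # Pj(i) = BASEj + CPUj(i-1)/2 + NICEj
        history.append((cpu, priority))
    return history

# Hypothetical values: base priority 50, nice 0, 40 units of accumulated CPU usage
print(recompute(50, 0, 40.0, 3))  # [(20.0, 70.0), (10.0, 60.0), (5.0, 55.0)]
```

The halving means recent CPU usage dominates: a process that hogged the CPU sees its priority number shrink back toward BASE within a few seconds of going idle, restoring fairness without explicit aging bookkeeping.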
CPU SCHEDULING Algorithms: MULTIPLE-PROCESSOR SCHEDULING
Different rules for homogeneous or heterogeneous processors.
Load sharing in the distribution of work, such that all processors have an equal amount to do.
Each processor can schedule from a common ready queue (equal machines), OR a master-slave arrangement can be used.
LINUX CPU SCHEDULING
Uses a simple priority-based scheduling algorithm.
Distinguishes three classes of processes for scheduling purposes:
Real-time FIFO processes have the highest priority and are not preemptable.
Real-time round-robin processes are the same as real-time FIFO processes, except that they are preemptible.
Normal (timesharing) processes have lower priority than the previous two.
LINUX CPU SCHEDULING
Each process has a scheduling priority and a quantum associated with it; the quantum is decremented by one as the process runs.
Linux schedules processes via a GOODNESS algorithm, which chooses to run the process with the highest goodness.
The algorithm does not scale well: if the number of existing processes is very large, it is inefficient to recompute all dynamic priorities at once.