3.1 (a) The Main Features of Operating Systems
Chapter 3.1 The Functions of Operating Systems

The operating system (OS) must provide and manage hardware resources, provide an interface between the user and the machine and between applications software and the machine, and provide other services such as data security.

Originally, if a program needed input, the program itself had to contain the code to do this, and similarly if output were required. This led to duplication of code, and so the idea of an OS was born. The OS contained the necessary input and output functions that could be called by an application. Similarly, disk input and output routines were incorporated into the OS. This led to the creation of subroutines to do these simple tasks, such as reading a character from a keyboard or sending a character to a printer. The joining together of all these basic input and output routines led to the input-output control system (IOCS). Originally, the IOCS could only read a punched card or send data to a card punch. However, as new input and output media, such as magnetic tape and disk, were developed, the IOCS became more complex.

Another complication was added when assembly and high-level languages were developed, as the machine did not use these languages directly. Machines use binary codes for very simple instructions. With the development of these new types of programming language, a computer would have to load an assembler, compiler or interpreter; load the assembly or high-level program; do the translation; store the results somewhere in memory; execute the program; read input data; and output the results. The system had now become too complex for the human user to organise it all. Also, as the processor could work much faster than the manual operator and the input and output devices, much time was wasted.
Further, to make full use of the processor, more than one program should be stored in memory so that the processor can give time to each of them. Suppose two programs are stored in memory: if one is using an input or output device (both very slow compared to the processor), it makes sense for the other program to use the processor. In fact this can be extended to more than two programs, as shown in Fig. 3.1.a.1. The OS must now manage the memory so that all three programs shown in Fig. 3.1.a.1 are kept separate, as well as any data that they use. It must also schedule the jobs into a sequence that makes best use of the processor.
Page 1 of 25
[Fig. 3.1.a.1: a timeline for programs A, B and C, each alternating between using the processor and requiring I/O; while one program waits for I/O another uses the processor, and the processor is idle only when all three are waiting.]

The I/O phase should not hold up the processor too much, which can easily happen if the I/O devices are very slow, like a keyboard or printer. This can be overcome by using Simultaneous Peripheral Operations On-Line (spooling). The idea is to store all input and output on a high-speed device such as a disk. Fig. 3.1.a.2 shows how this may be achieved.

[Fig. 3.1.a.2: an input device feeds an input spool via a read process; the application program reads from the input spool and writes to an output spool; a write process sends the output spool to the output device.]

Another problem is that programs may not be loaded into the same memory locations each time they are loaded. For example, suppose that three programs are loaded in the order A, B, C on one occasion and in the order C, A, B on another occasion. The results are shown in Fig. 3.1.a.3.
[Fig. 3.1.a.3: two memory maps. In the first, the OS is followed by Program A, Program B, Program C and free space; in the second, the OS is followed by Program C, Program A, Program B and free space.]

A further problem occurs if two or more users wish to use the same program at the same time. For example, suppose user X and user Y both wish to use a compiler for C++ at the same time. Clearly it is a waste of memory if two copies of the compiler have to be loaded into main memory at the same time. It would make much more sense if user X's program and user Y's program were stored in main memory together with a single copy of the compiler, as shown in Fig. 3.1.a.4.

[Fig. 3.1.a.4: a memory map showing the OS, user X's program and data, user Y's program and data, a single copy of the compiler, and free space.]

Now the two users can use the compiler in turns and will want to use different parts of the compiler. Also note that there are two different sets of data for the compiler: user X's program and user Y's program. These two sets of data, and the outputs from the
compiler for the two programs, must be kept separate. Programs such as this compiler, working in the way described, are called re-entrant.

Memory management, scheduling and spooling are described in more detail in the following sections.

Distributed systems have operating systems that arrange for the sharing of the resources of the system among its users. No action is required from the user. For example, when a user asks for a particular resource in a network system, the resource is allocated by the OS in a way that is transparent to the user.
3.1 (b) Interrupts

The simplest way of obeying instructions is shown in Fig. 3.1.b.1.

[Fig. 3.1.b.1: a flowchart — Start, fetch instruction, execute instruction, then 'Any more instructions?'; if yes, fetch the next instruction, if no, End.]

This is satisfactory so long as nothing goes wrong. Unfortunately things do go wrong, and sometimes the normal order of operation needs to be changed. For example, a user may have used up all the time allocated to his use of the processor. This change in order is instigated by messages to the processor, called interrupts. There are a number of different types of interrupt:

- I/O interrupt: generated by an I/O device to signal that a job is complete or that an error has occurred, e.g. the printer is out of paper or is not connected.
- Timer interrupt: generated by an internal clock indicating that the processor must attend to time-critical activities (see scheduling later).
- Hardware error: for example a power failure, which indicates that the OS must close down as safely as possible.
- Program interrupt: generated by an error in a program, such as a violation of memory use (trying to use part of the memory reserved by the OS for other purposes) or an attempt to carry out an invalid operation (such as division by zero).
If the OS is to manage interrupts, the sequence in Fig. 3.1.b.1 needs to be modified as shown in Fig. 3.1.b.2.

[Fig. 3.1.b.2: the fetch-execute flowchart extended — after executing an instruction the OS asks 'Is there an interrupt?'; if yes, it services the interrupt before checking for more instructions.]

This diagram shows that, after the execution of an instruction, the OS must see if an interrupt has occurred. If one has, the OS must service the interrupt if it is more important than the task already being carried out (see priorities later). This involves obeying a new set of instructions. The real problem is: how can the OS arrange for the interrupted program to resume from exactly where it left off? In order to do this, the contents of all the registers in the processor must be saved so that the OS can restore them after servicing the interrupt. Chapter 3.3 explains which registers have to have their contents stored, as well as explaining the processing cycle in more detail.

Another problem the OS has to deal with arises if an interrupt occurs while another interrupt is being serviced. There are several ways of dealing with this, but the simplest is to place the interrupts in a queue and only allow return to the originally interrupted program when the queue is empty. Alternative systems are explained in Section 3.1.c. Taking the simplest case, the order of processing is shown in Fig. 3.1.b.3.
[Fig. 3.1.b.3: the fetch-execute flowchart with an interrupt queue — after each instruction the OS checks whether there is an interrupt in the interrupt queue; if so, it services the next interrupt in the queue before continuing; when there are no more instructions, End.]

The queue of interrupts is a normal first-in first-out (FIFO) queue and holds indicators to the interrupts that need to be serviced.
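The cycle in Fig. 3.1.b.3 can be sketched in a few lines of Python. This is only an illustration of the control flow, not of real hardware: the 'instructions' and the interrupt handler are invented stand-ins, and the FIFO interrupt queue is a plain deque.

```python
from collections import deque

log = []
interrupts = deque()          # FIFO interrupt queue

def run(program):
    """Illustrative fetch-execute cycle: after each instruction is
    executed, any interrupts waiting in the FIFO queue are serviced
    before the next instruction is fetched."""
    for instruction in program:      # fetch, then execute
        instruction()
        while interrupts:            # is there an interrupt in the queue?
            handler = interrupts.popleft()
            handler()                # service the next interrupt

def instr_a():
    log.append("instruction A")

def instr_b():
    log.append("instruction B")
    # executing this instruction raises an I/O interrupt
    interrupts.append(lambda: log.append("serviced I/O interrupt"))

run([instr_a, instr_b, instr_a])
print(log)
```

Note that the interrupted 'program' resumes exactly where it left off once the queue is empty, which is the behaviour the flowchart describes.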
3.1 (c) Scheduling

The earliest operating systems were used to control single-user computer systems. In those days, the operating system would read in one job, find the data and devices the job needed, let the job run to completion, and then read in another job. In contrast, the computer systems that a modern OS manages are capable of multiprogramming: executing many programs concurrently. With multiprogramming, when a job cannot use the processor, the system can suspend, or interrupt, the job, freeing the processor to work on another job. The OS makes multiprogramming possible by capturing and saving all the relevant information about the interrupted program before allowing another program to execute. When the interrupted program is ready to begin executing again, it can resume execution just where it left off. Multiprogramming allows an OS to run thousands of programs simultaneously for users who might be working on different projects at different physical locations around the world.

So, one of the tasks of the OS is to arrange the jobs that need to be done into an appropriate order. The order may be chosen to ensure that maximum use is made of the processor; another order may make one job more important than another. In the latter case the OS makes use of priorities. Suppose the processor is required by program A, which is printing wage slips for the employees of a large company, and by program B, which is analysing the annual, worldwide sales of the company, which has a turnover of many millions of pounds. Program A makes little use of the processor and is said to be I/O bound. Program B makes a great deal of use of the processor and is said to be processor bound. If program B has priority over program A for use of the processor, it could be a long time before program A can print any wage slips. This is shown in Fig. 3.1.c.1.

[Fig. 3.1.c.1: a timeline in which B uses the processor almost continuously, while A spends most of its time waiting for the processor and only occasionally gets to run and do its I/O.]
Fig. 3.1.c.2 shows what happens if A is given priority over B for use of the processor. This shows that the I/O bound program can still run in a reasonable time and much better throughput is achieved.

[Fig. 3.1.c.2: a timeline in which A runs briefly whenever it needs the processor and then does its I/O while B uses the processor; both programs make steady progress.]

The objectives of scheduling are to:

- maximise the use of the whole of the computer system;
- be fair to all users;
- provide a reasonable response time to all users, whether they are on-line users or batch processing users;
- prevent the system failing if it is becoming overloaded;
- make sure that the system is consistent by always giving similar response times to similar activities from day to day.

To achieve these objectives some criteria are needed in order to determine the order in which jobs are executed. The following is a list of criteria which may be used to determine a schedule which will achieve the above objectives.

- Priority. Give some jobs a greater priority than others when deciding which job should be given access to the processor.
- I/O or processor bound. If a processor bound job is given the main access to the processor it could prevent the I/O devices being serviced efficiently.
- Type of job. Batch processing, on-line and real-time jobs all require different response times.
- Resource requirements. The amount of time needed to complete the job, the memory required, I/O and processor time.
- Resources used so far. The amount of processor time used so far, how much I/O has been used so far.
- Waiting time. The time the job has been waiting to use the system.

In order to understand how scheduling is accomplished it is important to realise that any job may be in one, and only one, of three states. A job may be ready to start, running on the system, or blocked because it is waiting for a peripheral, for example. Fig. 3.1.c.3 shows how jobs may be moved from one state to another.
Note that a job can only enter the running state from the ready state. The ready and blocked states are queues that may hold several jobs. On a standard single-processor computer only one job can be in the running state. Also, all jobs entering the system normally enter via the ready state and (normally) only leave the system from the running state.

[Fig. 3.1.c.3: the three-state diagram — jobs enter the system into the READY queue, move between READY and RUNNING, move from RUNNING to BLOCKED while waiting for a peripheral and back to READY when it is available, and leave the system from the RUNNING state.]

When entering the system a job is placed in the ready queue by a part of the OS called the High Level Scheduler (HLS). The HLS makes sure that the system is not overloaded. Sometimes it is necessary to swap jobs between the main memory and backing store (see Memory Management in Section 3.1.d). This is done by the Medium Level Scheduler (MLS). Moving jobs in and out of the ready state is done by the Low Level Scheduler (LLS). The LLS decides the order in which jobs are to be placed in the running state.

There are many policies that may be used to do scheduling, but they can all be placed in one of two classes: pre-emptive and non-pre-emptive policies. A pre-emptive scheme allows the LLS to remove a job from the running state so that another job can be placed in the running state. In a non-pre-emptive scheme each job runs until it no longer requires the processor. This may be because it has finished or because it needs an I/O device. Some common scheduling policies are

- First Come First Served (FCFS)
- Shortest Job First (SJF)
- Round Robin (RR)
- Shortest Remaining Time (SRT)
- Multi-level Feedback Queues (MFQ)

and there are many more.
- FCFS simply means that the first job to enter the ready queue is the first to enter the running state. This favours long jobs.
- SJF means that the jobs in the ready queue are kept in ascending order of the time expected to be needed by each job. New jobs are added to the queue in such a way as to preserve this order.
- RR gives each job a maximum length of processor time (called a time slice), after which the job is put at the back of the ready queue and the job at the front of the queue is given use of the processor. If a job is completed before its time slice is up, it leaves the system.
- SRT sorts the ready queue on the amount of time still expected to be required by each job. This scheme favours short jobs even more than SJF, and there is a danger of long jobs being prevented from running.
- MFQ involves several queues of different priorities, with jobs migrating downwards.

With SJF the jobs are sorted once, as they enter the queue, according to their expected total time. With SRT, whenever an interrupt is raised the queue is re-sorted with the shortest remaining time first; in this way SRT favours short jobs even more than SJF.

There are other ways of allocating priorities. Safety-critical jobs will be given very high priority; on-line and real-time applications will also have to have high priorities. For example, a computer monitoring the temperature and pressure in a chemical process whilst analysing the results of readings taken over a period of time must give the high priority to the control program. If the temperature or pressure goes out of a predefined range, the control program must take over immediately. Similarly, if a bank's computer is printing bank statements overnight and someone wishes to use a cash point, the cash point job must take priority. This scheme is shown in Fig. 3.1.c.4, which shows that queues are needed for jobs with the same priority.
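The difference FCFS and SJF make to waiting times can be shown with a small Python sketch. The job names and processor times below are invented for illustration, and all jobs are assumed to be in the ready queue from the start.

```python
def fcfs_order(jobs):
    """First Come First Served: run jobs in arrival order."""
    return list(jobs)

def sjf_order(jobs):
    """Shortest Job First: sort the ready queue by expected run time."""
    return sorted(jobs, key=lambda job: job[1])

def average_wait(order):
    """Average time each job waits before it starts running."""
    waits, clock = [], 0
    for _name, burst in order:
        waits.append(clock)
        clock += burst
    return sum(waits) / len(waits)

# (job name, expected processor time) - all assumed ready at time 0
jobs = [("A", 24), ("B", 3), ("C", 3)]
print(average_wait(fcfs_order(jobs)))  # 17.0 - long job A holds up B and C
print(average_wait(sjf_order(jobs)))   # 3.0  - short jobs go first
```

The sketch confirms the point made above: FCFS favours the long job A, while SJF lets the short jobs through first and cuts the average wait considerably.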
[Fig. 3.1.c.4: four FIFO queues, priority 1 (high) down to priority 4 (low), all feeding the processor.]
In this scheme, a job can only be given use of the processor if all the jobs at higher levels have been completed. Also, if a job enters a queue that has a higher priority than the queue from which the running program came, the running program is placed back in the queue from which it came and the job that entered the higher-priority queue is placed in the running state.

Multi-level feedback queues work in a similar way except that each job is given a maximum length of processor time. When this time is up, and the job is not completely finished, the job is placed in the queue which has the next lower priority level. At the lowest level, instead of a first-in first-out queue, a round robin system is used. This is shown in Fig. 3.1.c.5.

[Fig. 3.1.c.5: four queues, priority 1 (high) down to priority 4 (low), feeding the processor; a job whose time is up moves down to the next queue, and the lowest-priority queue is a round robin (RR) queue.]

Glossary of Terms

- Active Set: the jobs being processed by a system at a given time.
- Control Unit: the device that allows jobs to enter the system from the memory queue.
- CPU Utilization: the percentage of time the CPU is busy and not waiting for peripheral devices to complete processing a job.
- Degree of Multiprogramming / Multiprogramming Level (MPL) / Load: the number of active processes in a system.
- Memory Queue: jobs are held in the memory queue until resources in the system are free to serve the job.
- Response Time: the amount of time a job is at a server, including the service time.
- Server: also referred to as a device; a component of a system that serves a job.
- Service Time: the amount of time a server takes to complete work on a task.
- Throughput: the number of jobs completed by the system or by a server per unit time.
- Visit Ratio: the number of times a job is serviced by a server before exiting the system.
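The multi-level feedback queue behaviour described above can be sketched in Python. This is a minimal model with invented job names and times: every level grants the running job one time slice, an unfinished job is demoted one level, and jobs at the bottom level simply requeue there, which makes that level behave as round robin.

```python
from collections import deque

def mfq(jobs, levels=4, time_slice=2):
    """Minimal multi-level feedback queue sketch: the highest non-empty
    queue supplies the next job; an unfinished job is demoted one level,
    and the lowest level acts as a round robin queue."""
    queues = [deque() for _ in range(levels)]
    for name, burst in jobs:
        queues[0].append([name, burst])     # all jobs enter at top priority
    finished = []
    while any(queues):
        level = next(i for i, q in enumerate(queues) if q)  # highest non-empty
        name, remaining = queues[level].popleft()
        remaining -= time_slice             # job runs for one time slice
        if remaining <= 0:
            finished.append(name)           # job complete: leaves the system
        else:
            demoted = min(level + 1, levels - 1)   # bottom level requeues (RR)
            queues[demoted].append([name, remaining])
    return finished

print(mfq([("A", 7), ("B", 2), ("C", 4)]))   # short jobs finish first
```

Running the sketch shows the short job B finishing in its first time slice, while the long job A sinks through the levels and completes last.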
3.1 (d) Memory Management

This section can become very complex. In an examination the questions will be limited to basic definitions and explanations. Calculations of addresses and other details will not be required; they are included here for completeness of the topic, for those students who wish to understand it in more detail.

In order for a job to be able to use the processor, the job must be stored in the computer's main memory. If there are several jobs to be stored, they, and their data, must be protected from the actions of other jobs. Suppose jobs A, B, C and D require 50k, 20k, 10k and 30k of memory respectively and the computer has a total of 130k available for jobs. (Remember the OS will require some memory.) Fig. 3.1.d.1 shows one possible arrangement of the jobs.

[Fig. 3.1.d.1: a memory map of the 130k available for jobs — the OS at the bottom, then Job A (50k), Job B (20k), Job C (10k), Job D (30k) and 20k free at the top.]

Now suppose job C terminates and job E, requiring 25k of memory, is next in the ready queue. Clearly job E cannot be loaded into the space that job C has relinquished. However, there is 20k + 10k = 30k of memory free in total, so the OS must find some way of using it. One solution would be to move job D up to job B. This would make heavy use of the processor, as not only must all the instructions be moved, but all addresses used in the instructions would have to be recalculated because all the addresses will have changed.
When jobs are loaded into memory, they may not always occupy the same locations. Suppose, instead of jobs A, B, C and D being needed and loaded in that order, it is required to load jobs A, B, D and E in that order. Now job D occupies different locations in memory to those shown above, so again there is a problem of using different addresses. The OS has the task of both loading the jobs and adjusting the addresses. The part of the OS which carries out these tasks is a program called the loader. The calculation of addresses can be done by recalculating each address used in the instructions once the address of the first instruction is known. Alternatively, relative addressing can be used; that is, addresses are specified relative to the first instruction.

Another problem to be solved is when to move jobs. Possible solutions are:

- whenever a job terminates;
- when a new job is too large for any existing space;
- at regular intervals;
- when the user decides.

This system is known as variable partitioning with compaction. Imagine that each job needs a space to fit into; this space is the partition. Each of the jobs requires a different size of space, hence variable partitions. These variable partitions are normally called segments, and the method of dividing memory up is called segmentation. We also saw that sometimes it is necessary to move jobs around so that they fill the holes left by jobs that leave; this is called compaction.

An alternative method is to divide both the memory and the jobs into fixed-size units called pages. As an example, suppose jobs A, B, C, D and E consist of 6, 4, 1, 3 and 2 pages respectively. Also suppose that the available memory for jobs consists of 12 pages and jobs A, B and C have been loaded into memory as shown in Fig. 3.1.d.2.

[Fig. 3.1.d.2: the 12 memory pages holding, from the bottom, A1 to A6, then B1 to B4, then C1, with one page free at the top.]
Now suppose job B terminates, releasing four pages, and jobs D and E are ready to be loaded. Clearly we have a similar problem to that caused by segmentation. The 'hole' consists of four pages into which job D (three pages) will fit, leaving one page plus the original one page of free memory. E consists of two pages, so there is enough memory for E, but the pages are not contiguous; in other words they are not joined together, and we have the situation shown in Fig. 3.1.d.3.

[Fig. 3.1.d.3: memory holding A1 to A6, then D1 to D3, one free page, C1 and one more free page; job E's two pages cannot be placed contiguously.]

The big difference between partitioning and paging is that jobs do not have to occupy contiguous pages. Thus the solution is shown in Fig. 3.1.d.4.

[Fig. 3.1.d.4: job E is split — E1 fills the free page below C1 and E2 the free page above it, so all 12 memory pages are in use.]

The problem with paging is again address allocation. This can be overcome by keeping a table that shows which memory pages are used for the job's pages. Then, if each address used in a job consists of a page number and the distance of the required location from the start of the page, a suitable conversion is possible.
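The loading of jobs D and E into non-contiguous pages can be sketched in Python. The memory layout below reproduces the situation after job B terminates, with memory modelled as a list of frames (None meaning free); the function and variable names are invented for illustration.

```python
def load_job(job_name, pages_needed, memory):
    """Assign any free frames (not necessarily contiguous) to a job's
    pages, returning the job's page table (job page -> memory frame)."""
    free = [i for i, frame in enumerate(memory) if frame is None]
    if len(free) < pages_needed:
        return None                      # not enough free frames in total
    page_table = {}
    for page in range(1, pages_needed + 1):
        frame = free[page - 1]
        memory[frame] = f"{job_name}{page}"
        page_table[page] = frame
    return page_table

# memory after job B leaves: A in frames 0-5, C in frame 10, rest free
memory = [f"A{i}" for i in range(1, 7)] + [None] * 4 + ["C1", None]
load_job("D", 3, memory)      # D fits into three adjacent free frames
table_e = load_job("E", 2, memory)
print(table_e)                # E's two pages land either side of C1
print(memory)
```

The point of the sketch is that E loads successfully even though no two adjacent frames are free: its page table simply records the scattered frames.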
Suppose, in job A, an instruction refers to a location that is on page 5 and is 46 locations from the start of page 5. This may be represented by the pair

    5 | 46

Now suppose we have the following page table for job A:

    Job Page | Memory Page
    A1       | 4
    A2       | 5
    A3       | 6
    A4       | 7
    A5       | 8
    A6       | 9

We see that page A5 is stored in page 8 of memory, thus

    5 | 46   becomes   8 | 46

Paging uses fixed-length blocks of memory. An alternative is to use variable-length blocks. This method is called segmentation. In segmentation, programmers divide jobs into segments, possibly of different sizes. Usually the segments consist of data, or subroutines, or groups of related subroutines. Since segments may be of different lengths, address calculation has to be carefully checked. The segment table must contain not only the start position of each segment but also its size. This is needed to ensure that an address does not go out of range. Fig. 3.1.d.5 shows how two jobs may be stored in memory. In this case the programmer split job A into 4 segments and job B into 3 segments. These two jobs, when loaded into memory, took up the positions shown in the figure.
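The conversion above can be written as a short Python sketch. The page table is taken from the example, but the page size of 1024 locations is an assumption made purely so that a single physical address can be produced; the text does not fix a page size.

```python
# page table for job A, from the example above: job page -> memory page
page_table = {1: 4, 2: 5, 3: 6, 4: 7, 5: 8, 6: 9}

def translate(page, displacement, page_size=1024):
    """Convert a logical (page, displacement) address into a physical
    address. The page size of 1024 is an assumed value for illustration."""
    memory_page = page_table[page]       # look up the memory page
    return memory_page * page_size + displacement

# page 5, displacement 46 -> memory page 8, same displacement
print(translate(5, 46))   # 8 * 1024 + 46 = 8238
```

Note that only the page number changes during translation; the displacement within the page is carried across unchanged.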
[Fig. 3.1.d.5: job A divided into segments A1 to A4 and job B into segments B1 to B3; in memory the segments of the two jobs are interleaved, with some free space remaining, showing that segments of different sizes from different jobs need not be stored together.]
Now suppose that an instruction specifies an address as segment 3, displacement (from the start of the segment) 132. The OS will look up, in the process segment table, the base address (in memory) of segment 3. The OS checks that the displacement is not greater than the segment size. If it is, an error is reported; otherwise the displacement is added to the base address to produce the actual address in memory to be used. The algorithm for this process is:

1. Get segment number.
2. Get displacement.
3. Use segment number to find length of segment from segment table.
4. If displacement is greater than segment size:
   4.1 produce error message;
   4.2 stop.
5. Use segment number to find base address of segment from segment table.
6. Add displacement to base address to produce physical address.

This is also shown in Fig. 3.1.d.6.

[Fig. 3.1.d.6: the segment number indexes the segment table, which holds each segment's size and base address; the displacement is checked against the size and then added to the base address to give the physical address.]

Paging and segmentation lead to another important technique called virtual memory. We have seen that jobs can be loaded into memory when they are needed using a paging technique. When a program is running, only those pages that contain code that is needed have to be loaded. For example, suppose a word processor has been written with page 1 containing the instructions that allow users to enter and alter text. Also suppose that page 2 contains the code for formatting characters, page 3 the code for formatting paragraphs, and page 4 the code for cutting, copying and pasting. To run this word processor only page 1 needs to be loaded initially. If the user then wants to format some characters so that they are in bold, page 2 will have to be loaded. Similarly, if the user wishes to copy and paste a piece of text, page 4 will have to be loaded. When other facilities are needed, the appropriate page can be loaded. If the user now wishes to format some more characters, the OS does not need to load page 2 again as it is already loaded.
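The six-step algorithm above translates directly into Python. The segment table below is hypothetical — the sizes and base addresses are invented so that the segment 3, displacement 132 example can be worked through.

```python
# Hypothetical segment table: segment number -> (size, base address)
segment_table = {
    1: (200, 1000),
    2: (350, 1200),
    3: (500, 1550),
}

def physical_address(segment, displacement):
    """Steps 1-6 of the algorithm: look up the segment's size and base
    address, check the displacement is in range, then add it to the base."""
    size, base = segment_table[segment]          # steps 3 and 5
    if displacement > size:                      # step 4: range check
        raise ValueError("displacement out of range for segment")
    return base + displacement                   # step 6

# segment 3, displacement 132 -> 1550 + 132 = 1682
print(physical_address(3, 132))
```

The range check in step 4 is what stops a job addressing memory beyond its own segment, which is the protection the text describes.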
Now, what happens if there is insufficient space for the new page to be loaded? As only the pages containing active instructions need to be loaded, the new page can overwrite a page that is not currently being used. For example, suppose the user wishes to use paragraph formatting; then the OS can load page 3 into the memory currently occupied by page 2. Clearly, this means that programs can be written and used that are larger than the available memory.

There must be some system that decides which pages to overwrite. There are several, such as overwriting the page that has not been used for the longest period of time, replacing the page that has not been used recently, or the first-in first-out method. All of these create extra work for the OS. To complicate matters further, not every page can simply be overwritten. Some pages contain a job's data, which will change during the running of a program. To keep track of this, the OS keeps a flag for each page, initially set to zero. If the contents of the page change, the flag is set to 1. Now, before overwriting a page, the OS can see whether that page has been changed. If it has, the OS will save the page before loading a new page in its place. The OS now has to both load and save pages. If the memory is very full, this loading and saving can use up a great deal of time and can mean that most of the processor's time is spent swapping pages. This situation is called disk thrashing.

Systems can use both multiprogramming and virtual memory. Also, virtual memory can use segmentation as well as paging, although this can become very complex.
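The dirty-flag idea can be sketched with the first-in first-out replacement policy mentioned above. This is only a toy model: pages are identified by numbers, the frame store capacity is invented, and 'saving' a page just records its number.

```python
from collections import OrderedDict

class PageFrameStore:
    """Toy demand-paging model with a dirty flag per page and
    first-in first-out replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = OrderedDict()   # page -> dirty flag, in load order
        self.saved = []               # pages written back before eviction

    def access(self, page, write=False):
        if page not in self.frames:
            if len(self.frames) >= self.capacity:
                victim, dirty = self.frames.popitem(last=False)  # FIFO victim
                if dirty:
                    self.saved.append(victim)  # changed page saved first
            self.frames[page] = False          # load page, flag initially 0
        if write:
            self.frames[page] = True           # contents changed: flag set to 1

store = PageFrameStore(capacity=2)
store.access(1)                 # load page 1
store.access(2, write=True)     # load page 2 and modify its contents
store.access(3)                 # evicts page 1 - clean, no save needed
store.access(4)                 # evicts page 2 - dirty, must be saved first
print(store.saved)              # [2]
```

Only the modified page 2 is written back before being overwritten; the clean page 1 is simply discarded, which is exactly the saving the flag makes possible.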
3.1 (e) Spooling

Spooling was mentioned in Section 3.1.a and is used to place input and output on a fast access device, such as a disk, so that slow peripheral devices do not hold up the processor. In a multi-programming, multi-access or network system, several jobs may wish to use the peripheral devices at the same time. It is essential that the input and output for different jobs do not become mixed up. This can be achieved by using Simultaneous Peripheral Operations On-Line (spooling).

Suppose two jobs in a network system are producing output that is to go to a single printer. The output is being produced in sections and must be kept separate for each job. This can be done by opening two files on a disk, one for each job; suppose we call these files File1 and File2. As the files are on disk, job 1 can write to File1 whenever it wishes and job 2 can write to File2. When the output from a job is finished, the name (and other details) of the file can be placed in a queue. The OS can then send the output to the printer in the order in which the file details entered the queue. As the name of a file does not enter the queue until all output from the job to the corresponding file is complete, the output from different jobs is kept separate. Spooling can be used for any number of jobs.

It is important to realise that the output itself is not placed in the queue. The queue simply contains the details of the files that need to be printed, so that the OS sends the contents of a file to the printer only when the file is complete. The part of the OS that handles this task is called the spooler or print spooler. It should be noted that spooling not only keeps output from different jobs separate, it also saves the user having to wait for the processor until the output is actually printed by the printer (a relatively slow device). Spooling is used on personal computers as well as on large computer systems capable of multi-programming.
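The scheme described above can be sketched as a toy print spooler in Python. The class and file names are invented; the essential point modelled is that only the name of a completed file enters the FIFO print queue, never the output itself.

```python
from collections import deque

class PrintSpooler:
    """Toy print spooler: each job writes output to its own spool file,
    and only a finished file's name joins the FIFO print queue."""
    def __init__(self):
        self.spool_files = {}         # file name -> list of output sections
        self.print_queue = deque()    # names of completed files only

    def write(self, filename, section):
        self.spool_files.setdefault(filename, []).append(section)

    def close(self, filename):
        self.print_queue.append(filename)   # file complete: queue its name

    def print_all(self):
        printed = []
        while self.print_queue:
            name = self.print_queue.popleft()
            printed.append("".join(self.spool_files.pop(name)))
        return printed

spooler = PrintSpooler()
spooler.write("File1", "job 1, part 1; ")   # the two jobs' writes interleave,
spooler.write("File2", "job 2, part 1; ")   # but the spool files keep them apart
spooler.write("File1", "job 1, part 2")
spooler.close("File1")
spooler.write("File2", "job 2, part 2")
spooler.close("File2")
printed = spooler.print_all()
print(printed)
```

Although the writes from the two jobs interleave in time, each printed document comes out whole, because a file is only printed once it is complete.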
3.1 (f) Desktop PC Operating Systems

There are basically two types of OS used on PCs: command driven, such as MS-DOS, and those that use a graphical user interface (GUI), such as Windows. These differ in the way the user uses them and in the tasks that can be carried out. All OSs for PCs allow the user to copy, delete and move files, as well as letting the user create a hierarchical structure for storing files. They also allow the user to check the disk and tidy up the files on it. However, Windows allows the user to use much more memory than MS-DOS, and it allows multi-tasking. This is when the user opens more than one program at a time and can move from one to another. In fact the OS is swapping between the tasks so fast that the user is not aware of the swapping. Try playing a CD while writing a report! To do this, Windows uses the memory management techniques described in Section 3.1.d. In order to multi-task, Windows gives each application a very short period of time, called a time slice. When a time slice is up, an interrupt occurs and Windows passes control to the next application.

The OS supplies the user, and applications programs, with facilities to handle input and output, copy and move files, handle memory allocation and other basic tasks. In this section you will see how an OS is loaded and how it controls the PC.

Whenever the PC is turned on, the BIOS stored in ROM takes control and performs a number of operations. It checks the hardware, ports and so on, and finally loads the Master Boot Record (MBR) program into RAM and transfers control to it. The MBR is stored in the first physical sector of the hard disk. It contains a small program known as the bootstrap loader (responsible for booting into any OS) and the partition table. The functions of the MBR are given below:

- The boot process starts by executing the code in the first sector of the disk, the MBR.
- The MBR looks through the partition table to find the active partition (the hard disk's primary partition that holds the OS).
- Control is passed to that partition's boot record (PBR) to continue booting. This is the logical first sector (a 512-byte sector at the start of a primary partition), which contains a small program to initialise or run the OS files.
- The PBR locates the system-specific boot files. These boot files then continue the process of loading and initialising the rest of the OS:
  o IO.SYS - a binary file that provides the basic input/output interface between the ROM BIOS and the hardware.
  o MSDOS.SYS - a binary file considered to be the core of the DOS operating system.
  o CONFIG.SYS - a text file used to load drivers and memory managers, and also used to configure the system to the user's needs.
  o COMMAND.COM - the DOS user interface; it loads the command prompt, which interprets DOS commands.
  o AUTOEXEC.BAT - a text file that contains commands to set up display settings, environment variables and start-up routines.

In the case of Windows, the operating system loads into different parts of memory. The OS then guarantees the use of a block of memory to an application program and protects this memory from being accessed by another application program. If an application program needs to use a particular piece of hardware, Windows will load the appropriate device driver. Windows also uses virtual memory if an application has not been allocated sufficient main memory. If an application needs to use a hardware device, Windows checks to see if that device is available. If it is, the application is given use of that device; if not, the request is placed in a queue. In the case of a slow peripheral such as a printer, Windows saves the output to the hard disk first and then does the printing in the background so that the user can continue to use the application. If further printing is needed before other printing is completed, then spooling is used as described in Section 3.1.e.

Any OS has to be able to find files on a disk and to be able to store users' files. To do this, the OS uses the File Allocation Table (FAT). This table uses a linked list to point to the blocks on the disk that contain files. In order to do this, the OS has a routine that will format a disk. This simply means dividing the disk radially into sectors and into concentric circles called tracks. Two or more sectors on a single track make up a cluster. This is shown in Fig. 3.1.f.1.

[Fig. 3.1.f.1: a disk surface divided radially into sectors and into concentric tracks, with a cluster shown as three adjacent sectors on one track.]
A typical FAT is shown in Fig 3.1.f.2. The first column gives the cluster number and the second column is a pointer to the next cluster used to store a file. The last cluster used has a null pointer (usually FFFF) to indicate the end of the chain. The directory entry for a file holds a pointer to that file's first cluster in the FAT. The diagram shows details of two files stored on a disk.

[Fig. 3.1.f.2: a FAT with columns Cluster and Pointer. The directory entries for File 1 and File 2 each point to the first cluster of their file; following the pointers, the end of File 1 is in cluster 9 and the end of File 2 is in cluster 11, both marked FFFF.]

In order to find a file, the OS looks in the directory for the filename and, if it finds it, reads the cluster number for the start of the file. The OS can then follow the pointers in the FAT to find the rest of the file. Any unused cluster has a zero entry in this table. Thus, when a file is deleted, the clusters that were used to store it can simply be set to zero. To store a new file, all the OS has to do is find the first cluster with a zero entry and enter that cluster number in the directory; it then continues to search linearly for clusters with zero entries to build up the linked list. It may appear that these linear searches will take a long time. However, the FAT is normally loaded into RAM so that continual disk accesses are avoided, which speeds up searching considerably. Note that Windows 95/98 use virtual FAT (VFAT), which allows files to be saved 32 bits at a time (FAT uses 16 bits) and also allows file names of up to 255 characters. Windows 98 also supports FAT32, which allows hard drives larger than 2 Gbytes to be formatted.
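The find/delete/store operations described above can be sketched with an in-memory FAT. The file names and cluster numbers here are made up for illustration; the point is the linked-list mechanics: follow pointers to FFFF to read a file, zero the chain to delete, and scan linearly for a zero entry to allocate.

```python
FREE, EOF = 0x0000, 0xFFFF

# A toy 16-bit FAT: File 1 occupies clusters 2 -> 5 -> 9, File 2 occupies 7 -> 11.
fat = {2: 5, 5: 9, 9: EOF, 7: 11, 11: EOF, 3: FREE, 4: FREE, 6: FREE}
directory = {"FILE1.TXT": 2, "FILE2.TXT": 7}   # directory entry -> first cluster

def clusters_of(filename):
    """Follow the FAT chain from the directory entry until the EOF marker."""
    cluster, chain = directory[filename], []
    while cluster != EOF:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

def delete(filename):
    """Deleting a file just zeroes its chain, freeing those clusters."""
    for c in clusters_of(filename):
        fat[c] = FREE
    del directory[filename]

def first_free_cluster():
    """Linear search for a zero entry, as the OS does when storing a new file."""
    return min(c for c, p in fat.items() if p == FREE)

print(clusters_of("FILE1.TXT"))   # -> [2, 5, 9]
delete("FILE2.TXT")
print(first_free_cluster())       # -> 3
```

Because the whole `fat` dictionary lives in memory, every pointer-follow is a RAM lookup rather than a disk access, which is precisely why the OS keeps the FAT loaded in RAM.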
Glossary of Terms

FAT: File Allocation Table, a data structure present in all FAT volumes/HDD partitions
FAT12/FAT16/FAT32: FAT file systems using 12/16/32-bit cluster addressing respectively
FAT or FATxx: file systems that use file allocation tables
VFAT: the 32-bit code used to operate the file system in Win9x GUI mode
Cluster: single unit of data storage at the FATxx file system logic level
Sector: single unit of storage at the physical disk level
Physical sector address: sector address in absolute physical hardware terms
CHS sector address: as above, expressed in cylinder, head, sector terms
Logical sector address: sector address relative to the FATxx volume
Folder: a collection of named items as seen via Windows Explorer
File Folder: modern Windows-speak for "directory"
Directory: a file system data structure that lists files and/or directories
Directory entry: points to a file or directory, and contains information about it
Attributes: a collection of bits in a directory entry that describes it
3.1 Example Questions

1. Explain how an interrupt is handled by a processor which is working on another task. (4)
2. State three different types of interrupt that may occur and say in what order the three interrupts would be handled if they all arrived at the processor together, giving a reason for your answer. (5)
3. Explain the difference between (i) input/output bound jobs and (ii) processor bound jobs. Explain how the computer should prioritise the two types of job if it is going to make maximum use of the system. (4)
4. Describe two types of scheduling that are used by computer processing systems. (4)
5. Explain how interrupts are used in a round robin scheduling operating system. (3)
6. a) Explain the difference between paging and segmenting when referring to memory management techniques. (2)
   b) Describe how virtual memory can allow a word processor and a spreadsheet to run simultaneously in the memory of a computer even though both pieces of software are too large to fit into the computer's memory. (3)
7. Describe how a PC operating system uses a file allocation table to find files when necessary.
8. What is the difference between multiprogramming and multitasking? (4)