Operating System (ECS-501)


Unit IV: Memory Management
1.1 Bare Machine:
Introduction: Bare machine recovery is the ability to restore the operating system of a machine to the identical state it was in at a given point in time. It is important to ensure that the recovered operating system is internally consistent as of that point in time.
Why Bare Machine Recovery Is Required:
Loss of, or damage to, the hardware containing the operating system (failure of a HDD or RAID system).
Corruption of the operating system files (virus, human error, software malfunction).
Disasters affecting multiple machines (fire, explosion, flood, etc.).
Bare Machine Strategies:
a) System Reinstallation (full system reinstallation using installation CDs): Suitable for standardized machines that are completely controlled centrally. Can be a lengthy process. Cannot be standardized across different operating systems. Must also handle restoring to different hardware.
b) System Backup and Restore (file-by-file backup or image backup): May require additional storage. Regular backups need to be scheduled and administered. Requires additional software to handle the backup and restore processes. Requires a basic operating system with the necessary drivers to access all required devices. Requires knowledge of how to configure resources: partition disks, configure the network and boot details.

1.2 Resident Monitor:
A resident monitor (1950s-1970s) was a piece of software that was an integral part of a general-purpose computer using punched-card input (batch systems). The resident monitor governed the machine before and after each control card was executed, loaded and interpreted each control card, and acted as a job sequencer for batch operations.
Early computers were physically enormous machines run from a console. First the program had to be loaded manually from the front-panel switches or from punched cards; then the appropriate button was pushed to set the starting address and begin execution of the program. Resident monitors were replaced by the boot monitor, boot loader or BIOS, and the operating system kernel, when rewritable base instruction sets became necessary.
The primitive operating system in charge of executing the batch jobs was called a resident monitor: it resided permanently in memory and monitored the execution of each job in succession. The control-card interpreter is responsible for reading and carrying out the instructions on the cards as they are executed. The loader is invoked by the control-card interpreter to load system programs and application programs into memory when they are needed. The device drivers for the system's I/O devices are used by both the control-card interpreter and the loader to perform I/O. Often, the system and application programs are linked to these same device drivers, providing continuity in their operation as well as saving memory space and programming time.

1.3 Memory Management:
Introduction: In a multiprogramming system, a number of processes must be kept in memory in order to share the processor. Memory management is achieved through memory management algorithms, and each memory management algorithm requires its own hardware support. To be able to load programs anywhere in memory, the compiler must generate relocatable object code. We must also make sure that a program in memory addresses only its own area and no other program's area; therefore, some protection mechanism is also needed.
Address Binding: A program resides on a disk as a binary executable file. The program must be brought into memory and placed within a process for it to be executed. The processes on the disk waiting to be brought into memory for execution form the input queue. The binding of instructions and data to memory addresses can be done at any of the following steps:
a) Compile Time: If it is known at compile time where the process will reside in memory, then absolute code can be generated. For example, if you know that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, then it will be necessary to recompile this code.
b) Load Time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.

c) Execution Time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time.
Logical vs. Physical Address Space:
Logical address: an address generated by the CPU; also referred to as a virtual address.
Physical address: the address seen by the memory unit.
Logical and physical addresses are the same in the compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme. The run-time mapping from virtual to physical addresses is done by a hardware device called the memory management unit (MMU). In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses.

Dynamic Loading: To obtain better memory-space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in relocatable load format. The main program is loaded into memory and executed. When a routine needs to call another routine, the calling routine first checks to see whether the other routine has been loaded. If not, the relocatable loader is called to load the desired routine into memory, and then control is passed to the newly loaded routine. The advantage of dynamic loading is that an unused routine is never loaded. This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines: although the total program size may be large, the portion that is actually used may be much smaller. Dynamic loading does not require special support from the operating system; it is the responsibility of the users to design their programs to take advantage of such a method. The operating system may help the programmer, however, by providing library routines to implement dynamic loading.
Dynamic Linking & Shared Libraries: The concept of dynamic linking is similar to that of dynamic loading. Here, though, linking, rather than loading, is postponed until execution time. This feature is usually used with system libraries, such as language subroutine libraries. When a library is replaced by a new version, programs that link against it at execution time automatically use the new version, while programs linked before the new library was installed will continue using the older library. This system is known as shared libraries.
Overlays: The idea of overlays is to keep in memory only those instructions and data that are needed at any given time. When other instructions are needed, they are loaded into the space occupied previously by instructions that are no longer needed.

Contiguous Memory Allocation: The memory is usually divided into two partitions: one for the resident operating system (low memory) and one for the user processes (high memory).
Memory Protection: Here we protect the OS from user processes and protect user processes from one another. We can provide this protection by using a relocation register together with a limit register:
(i) Base (relocation) register: contains the value of the smallest physical address.
(ii) Limit register: contains the range of logical addresses; each logical address must be less than the limit register.
(iii) MMU: maps the logical address dynamically (hardware support for the relocation and limit registers).
Multiprogramming with fixed partitions: In this method, memory is divided into partitions whose sizes are fixed. The OS is placed into the lowest bytes of memory. Processes are classified on entry to the system according to their memory requirements. We need one process queue (PQ) for each class of process. If a process is selected to be allocated memory, then it goes into memory and competes for the processor. The number of fixed partitions gives the degree of multiprogramming. Since each queue has its own memory region, there is no competition between queues for memory.
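A minimal sketch of the base/limit protection check described above (the register values used here are hypothetical, not taken from the notes):

```python
def mmu_translate(logical_addr, base, limit):
    """Relocation-register scheme: check the address against the limit
    register, then add the base (relocation) register."""
    if logical_addr >= limit:        # outside the process's logical address space
        raise MemoryError("addressing error: trap to the operating system")
    return base + logical_addr

# Hypothetical registers: base = 14000, limit = 3000
print(mmu_translate(346, base=14000, limit=3000))   # -> 14346
```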

Fixed Partitioning with Swapping: This is a version of fixed partitioning that uses round-robin scheduling with some time quantum. When the time quantum for a process expires, it is swapped out of memory to disk and the next process in the corresponding process queue is swapped into memory. Normally, a process that is swapped out will eventually be swapped back into the same partition, but this restriction can be relaxed with dynamic relocation. In some cases, an executing process may request more memory than its partition size. Say we have a 6 KB process running in a 6 KB partition and it now requires 1 KB more memory. Then the following policies are possible:
a) Return control to the user program. Let the program decide either to quit or to modify its operation so that it can run (possibly slowly) in less space.
b) Abort the process. (The user states the maximum amount of memory that the process will need, so it is the user's responsibility to stick to that limit.)
c) If dynamic relocation is being used, swap the process out to the next-largest PQ and load it into that partition when its turn comes.
The main problem with the fixed partitioning method is how to determine the number of partitions and how to determine their sizes.

Fragmentation: If a whole partition is currently not being used, this waste is called external fragmentation. If a partition is being used by a process requiring less memory than the partition size, the unused space within the partition is called internal fragmentation. With such a composition of memory, if a new process P3 requiring 8 KB of memory arrives, it cannot be loaded even though there is enough total free space in memory, because of fragmentation.
Variable Partitioning: With fixed partitions we have to deal with the problem of determining the number and sizes of partitions so as to minimize internal and external fragmentation. If we use variable partitioning instead, partition sizes may vary dynamically. In the variable partitioning method, we keep a table (linked list) indicating used/free areas in memory. Initially, the whole memory is free and is considered one large block. When a new process arrives, the OS searches for a block of free memory large enough for that process and keeps the rest available (free) for future processes. If a block becomes free, the OS tries to merge it with its neighbors if they are also free.

There are three algorithms for searching the list of free blocks for a specific amount of memory (the dynamic storage-allocation problem):
a) First Fit: Allocate the first free block that is large enough for the new process. This is a fast algorithm.
b) Best Fit: Allocate the smallest block among those that are large enough for the new process. Here the OS has to search the entire list, or it can keep the list sorted and stop when it hits an entry whose size is larger than the size of the new process. This algorithm produces the smallest leftover block; however, it requires more time for searching the whole list or keeping it sorted.
c) Worst Fit: Allocate the largest block among those that are large enough for the new process. Again a search of the entire list, or keeping it sorted, is needed. This algorithm produces the largest leftover block.
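A minimal sketch of the three placement strategies over a list of free-block sizes (the hole sizes below are hypothetical, chosen only for illustration):

```python
def allocate(free_blocks, request, strategy="first"):
    """Pick a free block for `request` using first/best/worst fit.
    Returns the index of the chosen block, or None if no block is large enough."""
    candidates = [(size, i) for i, size in enumerate(free_blocks) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return candidates[0][1]      # first block that is large enough
    if strategy == "best":
        return min(candidates)[1]    # smallest block that still fits
    if strategy == "worst":
        return max(candidates)[1]    # largest block that fits
    raise ValueError("unknown strategy")

holes = [10, 4, 20, 18, 7]           # hypothetical free list, in KB
for s in ("first", "best", "worst"):
    print(s, "fit chooses block", allocate(holes, 6, s))   # -> 0, 4, 2
```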

Compaction: Compaction is a method to overcome the external fragmentation problem: all free blocks are brought together as one large block of free space. Compaction requires dynamic relocation. Certainly, compaction has a cost, and selection of an optimal compaction strategy is difficult. One method for compaction is swapping out those processes that are to be moved within the memory, and then swapping them back in at different memory locations.
Paging (Non-Contiguous Memory Allocation): Paging is a memory-management scheme that permits the physical address space of a process to be noncontiguous. Paging avoids the considerable problem of fitting memory chunks of varying sizes onto the backing store, from which most of the previous memory-management schemes suffered. Paging is handled by hardware.
Basic Method: Physical memory is broken into fixed-sized blocks called frames (the size is a power of 2, between 512 bytes and 8,192 bytes). Logical memory is broken into blocks of the same size called pages.

When a process is to be executed, its pages are loaded into any available memory frames from the backing store. The backing store is divided into fixed-sized blocks that are of the same size as the memory frames. Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address (frame number) of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
Physical address = (page size * frame number) + offset value
(Figure: paging model of logical and physical memory, showing the page table mapping each page number to a frame number, with offsets 0 .. n-1 within each page.)

Example 1: (Refer to the figure: paging example for a 32-byte memory with 4-byte pages.)
a. Show the physical-memory implementation using the logical memory and page table as in the figure.
b. Find the physical address of logical address 3. Ans. (4 * 5) + 3 = 23.
c. Find the physical address of logical address 13. Ans. (4 * 2) + 1 = 9.
d. If pages are of 2,048 bytes, how many frames will a process of 72,766 bytes need? Ans. 36 frames (hint: 35 full pages + 1,086 bytes).
Example 2: Consider a logical address space of eight pages of 1,024 words each, mapped onto a physical memory of 32 frames.
a. How many bits are in the logical address?
b. How many bits are in the physical address?
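A minimal sketch of the translation in Example 1. The page table below is an assumption reconstructed to be consistent with answers (b) and (c), since the figure itself is not reproduced here:

```python
import math

PAGE_SIZE = 4   # bytes, as in the 32-byte memory / 4-byte page example
# Page table assumed from the figure, consistent with answers (b) and (c):
# page 0 -> frame 5, page 1 -> frame 6, page 2 -> frame 1, page 3 -> frame 2
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def to_physical(logical_addr):
    page, offset = divmod(logical_addr, PAGE_SIZE)   # split into (p, d)
    frame = page_table[page]                         # page-table lookup
    return frame * PAGE_SIZE + offset                # physical = frame*page_size + d

print(to_physical(3))              # -> 23  (page 0, offset 3, frame 5)
print(to_physical(13))             # -> 9   (page 3, offset 1, frame 2)
print(math.ceil(72766 / 2048))     # part (d): -> 36 frames
```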

Hardware Support (Implementation): The page table is kept in main memory. The page-table base register (PTBR) points to the page table, and the page-table length register (PTLR) indicates the size of the page table. In this scheme every data/instruction access requires two memory accesses: one for the page table and one for the data/instruction. The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or a translation look-aside buffer (TLB). The percentage of time that a particular page number is found in the TLB is called the hit ratio.
Example 1: Consider a paging system with the page table stored in memory.
a. If a memory reference takes 200 nanoseconds, how long does a paged memory reference take? Ans. 400 nanoseconds: 200 nanoseconds to access the page table and 200 nanoseconds to access the word in memory.
b. If we add associative registers, and 75 percent of all page-table references are found in the associative registers, what is the effective memory reference time? (Assume that finding a page-table entry in the associative registers takes zero time, if the entry is there.) Ans. Effective access time = 0.75 * (200 nanoseconds) + 0.25 * (400 nanoseconds) = 250 nanoseconds.

Example 2: If the hit ratio is 80%, it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory, what will be the effective memory access time? Ans. Effective access time = 0.80 * (120 nanoseconds) + 0.20 * (220 nanoseconds) = 140 nanoseconds.
Protection: Memory protection is implemented by associating protection bits with each frame. A valid-invalid bit is attached to each entry in the page table: "valid" indicates that the associated page is in the process's logical address space and is thus a legal page; "invalid" indicates that the page is not in the process's logical address space.
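A minimal sketch of the effective-access-time calculation used in the two TLB examples above:

```python
def effective_access_time(hit_ratio, tlb_time, mem_time):
    """EAT with a TLB: a hit costs tlb + one memory access,
    a miss costs tlb + two memory accesses (page table + data)."""
    hit = tlb_time + mem_time
    miss = tlb_time + 2 * mem_time
    return hit_ratio * hit + (1 - hit_ratio) * miss

# Example 1(b): TLB lookup assumed free, 200 ns memory, 75% hit ratio
print(effective_access_time(0.75, 0, 200))    # -> 250.0 ns
# Example 2: 20 ns TLB search, 100 ns memory, 80% hit ratio
print(effective_access_time(0.80, 20, 100))   # -> 140.0 ns
```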

Segmentation:
Basic Introduction:
a) Segmentation is a memory-management scheme that supports the user's view of memory.
b) A program is a collection of segments. A segment is a logical unit such as: the main program, a procedure, function, method, or object, local variables, global variables, a common block, the stack, the symbol table, arrays.
c) A logical address consists of a two-tuple: <segment-number, offset>.
d) The segment table maps these two-dimensional logical addresses into one-dimensional physical addresses; each table entry has: base - the starting physical address where the segment resides in memory, and limit - the length of the segment.
e) The segment-table base register (STBR) points to the segment table's location in memory.
f) The segment-table length register (STLR) indicates the number of segments used by a program; segment number s is legal if s < STLR.

Example: Problem 1: What is the physical-memory mapping of byte 53 of segment 2? Ans. 53 + 4,300 (the base of segment 2 in the figure) = 4,353.

Problem 2: What is the physical-memory mapping of byte 852 of segment 3? Ans. 852 + the base of segment 3 (from the figure).
Problem 3: What is the physical-memory mapping of byte 1,222 of segment 0? Ans. A reference to byte 1,222 of segment 0 results in a trap to the operating system, as this segment is only 1,000 bytes long.
Hardware Support: Memory Protection & Sharing:
a) Protection: with each entry in the segment table we associate a validation bit (validation bit = 0 means an illegal segment) and read/write/execute privileges.
b) Protection bits are associated with segments; code sharing occurs at the segment level.
c) Since segments vary in length, memory allocation is a dynamic storage-allocation problem.
d) Another advantage of segmentation involves the sharing of code or data.
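A minimal sketch of segment-table translation matching Problems 1 and 3. Segment 2's base (4,300) and segment 0's length (1,000) follow from the answers above; segment 0's base and segment 2's limit here are illustrative assumptions only:

```python
# Segment table reconstructed from the worked answers: entry = (base, limit).
# Segment 0's base and segment 2's limit are assumptions, not from the notes.
segment_table = {0: (1400, 1000), 2: (4300, 400)}

def map_address(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:              # offset beyond the segment's length
        raise MemoryError("trap to the operating system: addressing error")
    return base + offset

print(map_address(2, 53))            # -> 4353 (Problem 1)
map_address(0, 1222)                 # raises: segment 0 is only 1,000 bytes long (Problem 3)
```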

Fragmentation:
a) The long-term scheduler must find and allocate memory for all the segments of a user program. This situation is similar to paging except that the segments are of variable length, whereas pages are all the same size.
b) Thus, as with the variable-sized partition scheme, memory allocation is a dynamic storage-allocation problem, usually solved with a best-fit or first-fit algorithm.
c) Segmentation may cause external fragmentation, when all blocks of free memory are too small to accommodate a segment.

Segmentation with Paging:
a) The logical address space of a process is divided into two partitions. The first partition consists of up to 8 K segments that are private to that process; the second partition consists of up to 8 K segments that are shared among all the processes.
b) Information about the first partition is kept in the Local Descriptor Table (LDT), and information about the second partition is kept in the Global Descriptor Table (GDT).
c) The logical address is a pair (selector, offset), where the selector is a 16-bit number and the offset is a 32-bit number specifying the location of the byte within the segment in question. The selector consists of:
s: segment number (13 bits)
g: whether the segment is in the GDT or LDT (1 bit)
p: protection (2 bits)
d) The paging scheme divides the 32-bit linear address into a page number (itself split into p1 and p2, since the page table is paged) and a page offset d.
e) In the figure, first the limit is used to check for address validity. If the address is not valid, a memory fault is generated, resulting in a trap to the operating system. If it is valid, the value of the offset is added to the value of the base, resulting in a 32-bit linear address. This address is then translated into a physical address.

1.4 Virtual Memory: Virtual memory is a technique that allows the execution of processes that are not completely in memory. One major advantage of this scheme is that programs can be larger than physical memory: a program is no longer constrained by the amount of physical memory that is available, and users can write programs for an extremely large virtual address space, simplifying the programming task. Because each user program could take less physical memory, more programs could be run at the same time, with a corresponding increase in CPU utilization and throughput but with no increase in response time or turnaround time. Less I/O would be needed to load or swap each user program into memory, so each user program would run faster.

(Figures: virtual memory that is larger than physical memory; the virtual address space.)
Virtual memory can be implemented via:
a) Demand Paging
b) Demand Segmentation
Demand Paging: Basic Concept: When a process is to be swapped in, the pager guesses which pages will be used before the process is swapped out again. Instead of swapping in a whole process, the pager brings only those necessary pages into memory. Thus, it avoids reading into memory pages that will not be used anyway, decreasing the swap time and the amount of physical memory needed.

Protection: With each page-table entry a valid-invalid bit is associated (v = in memory, i = not in memory). Initially the valid-invalid bit is set to i on all entries. During address translation, if the valid-invalid bit in the page-table entry is i, a page fault occurs.
Hardware Support / Implementation: The first reference to a page that is not in memory will trap to the operating system: a page fault.
1. The operating system looks at another table to decide: if the reference was invalid, abort the process; otherwise the page is just not in memory.
2. Get an empty frame.
3. Swap the page into the frame.
4. Reset the tables.
5. Set the validation bit = v.
6. Restart the instruction.

Steps for handling a page fault:
1. We check an internal table (usually kept with the process control block) for this process to determine whether the reference was a valid or an invalid memory access.
2. If the reference was invalid, we terminate the process. If it was valid, but we have not yet brought in that page, we now page it in.
3. We find a free frame (by taking one from the free-frame list, for example).
4. We schedule a disk operation to read the desired page into the newly allocated frame.
5. When the disk read is complete, we modify the internal table kept with the process and the page table to indicate that the page is now in memory.
6. We restart the instruction that was interrupted by the trap. The process can now access the page as though it had always been in memory.
These operations can be summarized as:
1. Checking the address and finding a free frame or victim page (fast)
2. Swapping out the victim page to secondary storage (slow)
3. Swapping in the page from secondary storage (slow)
4. Context switching to the process and resuming its execution (fast)
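A minimal toy simulation of these steps (the page numbers, frame numbers, and backing-store contents are hypothetical; real fault handling happens inside the kernel):

```python
from collections import deque

page_table = {0: None, 1: None, 2: None}     # page -> frame, None means invalid/not in memory
free_frames = deque([7, 8, 9])               # free-frame list
backing_store = {0: "page-0 data", 1: "page-1 data", 2: "page-2 data"}   # stands in for the disk
memory = {}                                  # frame -> contents

def access(page):
    if page not in page_table:               # invalid reference
        raise MemoryError("trap: invalid reference, terminate process")
    if page_table[page] is None:             # valid but not in memory: page fault
        frame = free_frames.popleft()        # 1. find a free frame
        memory[frame] = backing_store[page]  # 2. read the page into the frame
        page_table[page] = frame             # 3. update the page table (valid bit = v)
        # 4. restart the faulting instruction (here we simply fall through)
    return memory[page_table[page]]

print(access(1))   # first reference: page fault, then the data is returned
print(access(1))   # second reference: no fault
```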

Eventually, after every page that the process actually needs has been brought into memory, it can execute with no more faults. This scheme is pure demand paging: never bring a page into memory until it is required.
Performance of Demand Paging:
effective access time = (1 - p) * ma + p * page-fault time
where p = probability of a page fault and ma = memory access time.
It is important to keep the page-fault rate low in a demand-paging system. Otherwise, the effective access time increases, slowing process execution dramatically.
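A minimal sketch of this formula; the numbers below are hypothetical, chosen only to show how strongly even a small fault rate inflates the effective access time:

```python
def demand_paging_eat(p, mem_access_ns, fault_service_ns):
    """Effective access time = (1 - p) * ma + p * page-fault service time."""
    return (1 - p) * mem_access_ns + p * fault_service_ns

# Hypothetical numbers: 200 ns memory access, 8 ms (8,000,000 ns) fault service,
# one fault per 1,000 references
print(demand_paging_eat(0.001, 200, 8_000_000))   # -> 8199.8 ns, about 40x slower
```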

Q. Write a short note on the dirty bit (modify bit).
Ans. In order to reduce the page-fault service time, a special bit called the dirty bit can be associated with each page. The dirty bit is set to "1" by the hardware whenever the page is modified (written into). When we select a victim using a page-replacement algorithm, we examine its dirty bit. If it is set, the page has been modified since it was swapped in, and we have to write that page back to the backing store. If the dirty bit is reset, the page has not been modified since it was swapped in, so we do not have to write it to the backing store: the copy in the backing store is still valid.
Page Replacement: A page-replacement algorithm determines how the victim page (the page to be replaced) is selected when a page fault occurs. The aim is to minimize the page-fault rate. The efficiency of a page-replacement algorithm is evaluated by running it on a particular string of memory references and computing the number of page faults. Reference strings are either generated randomly, or obtained by tracing the paging behavior of a system and recording the page number for each logical memory reference.
Basic page replacement:
1. Find the location of the desired page on disk.
2. Find a free frame: if there is a free frame, use it; if there is no free frame, use a page-replacement algorithm to select a victim frame.
3. Bring the desired page into the (newly) free frame; update the page and frame tables.
4. Restart the process.

FIFO Page Replacement: This is a simple algorithm and easy to implement. The idea is straightforward: choose the oldest page as the victim.
Example: Assume there are 3 frames, and consider the reference string 5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0. Show the content of memory after each memory reference if the FIFO page-replacement algorithm is used. Find also the number of page faults. (10 page faults are caused by FIFO.)
Belady's Anomaly: Normally, one would expect that as the total number of frames increases, the number of page faults decreases. However, for FIFO there are cases where this generalization fails; this is called Belady's anomaly. As an exercise, consider the reference string below. Apply the FIFO method and find the number of page faults for different numbers of frames, then examine whether the replacement suffers from Belady's anomaly. Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
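A minimal FIFO simulation for the reference strings above; it reproduces the 10 faults quoted for the first string, and for the exercise string it prints the fault counts for 3 and 4 frames so the anomaly can be seen:

```python
from collections import deque

def fifo_faults(reference_string, n_frames):
    frames, queue, faults = set(), deque(), 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == n_frames:          # no free frame: evict the oldest page
                frames.remove(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

ref = [5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0]
print(fifo_faults(ref, 3))                               # -> 10 page faults
belady = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(belady, 3), fifo_faults(belady, 4))    # -> 9 10 (more frames, more faults)
```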

Least Recently Used (LRU): In this algorithm, the victim is the page that has not been used for the longest period. The OS using this method has to record, for each page, the time it was last used, which requires some extra storage. In the simplest approximation, the OS sets the reference bit of a page to "1" when it is referenced. This bit does not give the order of use; it simply tells whether the corresponding frame has been referenced recently or not. The OS resets all reference bits periodically.
Example: Assume there are 3 frames, and consider the reference string 5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0. Show the content of memory after each memory reference if the LRU page-replacement algorithm is used. Find also the number of page faults. (This algorithm results in 9 page faults.)
Optimal Page Replacement Algorithm (OPT): In this algorithm, the victim is the page which will not be used for the longest period. For a fixed number of frames, OPT has the lowest page-fault rate of all the page-replacement algorithms, but there is a problem with this algorithm: OPT cannot be implemented in practice, because it requires future knowledge. However, it is used for performance comparison.
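A minimal LRU simulation for the same reference string; it reproduces the 9 faults quoted above:

```python
def lru_faults(reference_string, n_frames):
    frames, faults = [], 0          # `frames` ordered from least to most recently used
    for page in reference_string:
        if page in frames:
            frames.remove(page)     # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == n_frames:
                frames.pop(0)       # evict the least recently used page
        frames.append(page)
    return faults

ref = [5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0]
print(lru_faults(ref, 3))           # -> 9 page faults
```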

Example: Assume we have 3 frames and consider the reference string 5, 7, 6, 0, 7, 1, 7, 2, 0, 1, 7, 1, 0. Show the content of memory after each memory reference if the OPT page-replacement algorithm is used. Find also the number of page faults. (This algorithm generates a page-replacement scheme with 7 page faults.)
Thrashing: A process is thrashing if it is spending more time paging in and out (due to frequent page faults) than executing. Thrashing causes considerable degradation in system performance. If a process does not have enough frames allocated to it, it will issue a page fault. A victim page must be chosen, but if all its pages are in active use, the victim page selection and a new page replacement will have to be done again very shortly. This means another page fault will be issued soon, and so on. Local replacement algorithms can limit the effects of thrashing. If the degree of multiprogramming is increased beyond a limit, processor utilization falls considerably because of thrashing.

To prevent thrashing, we must provide a process with as many frames as it needs. For this, a model called the working set model has been developed, which depends on the locality model of program execution.
Working Set Model: To prevent thrashing, we must provide a process with as many frames as it needs. For this, we use the working set model, which depends on the locality model of program execution, discussed earlier. We use a parameter, Δ, called the working-set window size, and examine the last Δ page references. The set of pages in the last Δ page references is called the working set of the process.
Example: Assume Δ = 10, and consider the reference string given below, on which the window is shown at different time instants. The working sets of this process at these time instants are:
WS(t1) = {2,1,5,7}
WS(t2) = {7,5,1,3,4}
WS(t3) = {3,4}
Note that in calculating the working sets, we do not reduce consecutive references to the same page to a single reference. The choice of Δ is crucial: if Δ is too small, it will not cover the entire working set; if it is too large, several localities of a process may overlap. Now, compute the working-set size (WSS) for each process, and find the total demand D of the system at that instant in time as the sum of all the WSS values.
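A minimal sketch of computing a working set over a window of the last Δ references (the reference string in the notes' figure is not reproduced here, so the one below is hypothetical):

```python
def working_set(reference_string, t, delta):
    """Working set at time t: the distinct pages among the last `delta`
    references (references indexed from 1, so the window is positions t-delta+1 .. t)."""
    window = reference_string[max(0, t - delta):t]
    return set(window)

# Hypothetical reference string, for illustration only
ref = [1, 2, 3, 2, 1, 2, 4, 2, 1, 5, 6, 5, 6, 5, 6, 7, 7, 7]
print(working_set(ref, 10, delta=10))   # working set after the first 10 references
print(working_set(ref, 18, delta=10))   # working set at the end of the string
```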

If the number of frames is n, then:
a. If D > n, the system is thrashing.
b. If D < n, the system is all right, and the degree of multiprogramming can possibly be increased.
1.5 Cache Memory Organization: A cache is a small amount of fast memory placed between the processor and main memory, located either on the processor chip or on a separate module.
Cache Operation Overview: The processor requests the contents of some memory location, and the cache is checked for the requested data:
o If found, the requested word is delivered to the processor.
o If not found, a block of main memory is first read into the cache, and then the requested word is delivered to the processor.
When a block of data is fetched into the cache to satisfy a single memory reference, it is likely that there will be future references to that same memory location or to other words in the block (the locality of reference principle). Each block has a tag added to identify it.

An example of a typical cache organization is shown below:
1.6 Locality of Reference Principle: Memory references by the processor, for both data and instructions, tend to cluster. Programs contain iterative loops and subroutines: once a loop or subroutine is entered, there are repeated references to a small set of instructions. Operations on tables and arrays involve access to a clustered set of data words.

Q. Explain the structure of the page table (types of page table).
Ans. Structure of the page table:
A. Hierarchical Paging: This breaks up the logical address space into multiple page tables. A simple technique is a two-level page table. For example, a logical address (on a 32-bit machine with a 1 KB page size) is divided into:
a page number consisting of 22 bits
a page offset consisting of 10 bits
Since the page table is itself paged, the page number is further divided into:
a 12-bit page number (p1)
a 10-bit page offset (p2)
Thus, a logical address is p1 (12 bits) | p2 (10 bits) | d (10 bits), where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table. (Figures: two-level page-table scheme; address-translation scheme.)
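A minimal sketch of splitting a 32-bit logical address into p1, p2, and d for this two-level scheme (the sample address is arbitrary):

```python
# Two-level paging on a 32-bit logical address with 1 KB (2^10) pages:
# | p1 : 12 bits | p2 : 10 bits | d : 10 bits |
def split_address(logical_addr):
    d = logical_addr & 0x3FF            # low 10 bits: page offset
    p2 = (logical_addr >> 10) & 0x3FF   # next 10 bits: index into an inner page table
    p1 = logical_addr >> 20             # top 12 bits: index into the outer page table
    return p1, p2, d

print(split_address(0x12345678))        # -> (291, 277, 632)
```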

B. Hashed Page Table: A common approach for handling address spaces larger than 32 bits is to use a hashed page table. The virtual page number is hashed into a page table. This page table contains a chain of elements hashing to the same location. Virtual page numbers are compared along this chain while searching for a match; if a match is found, the corresponding physical frame is extracted.
C. Inverted Page Table: An inverted page table has one entry for each real page (frame) of memory. Each entry consists of the virtual address of the page stored in that real memory location. Each virtual address in the system consists of: <process-id, page-number, offset>.

Tutorial Sheet (Memory Management)
1. A process has four page frames allocated to it. The time of the last loading of a page into each page frame, the time of the last access to the page, the virtual page number in each frame, and the referenced (R) and modified (M) bits for each page frame are shown in the table below (columns: page frame #, virtual page #, time loaded, time referenced, R, M; times are in clock ticks from the process start time at time 0; the numeric entries are given in the original table). A page fault to virtual page 4 has occurred. Which page frame will have its contents replaced under each of the following replacement algorithms? (i) FIFO (ii) LRU. Give reasons in support of your answer. Also explain the working of these algorithms.
2. On a system using demand-paged memory, it takes 120 ns to satisfy a memory request if the page is in memory. If the page is not in memory, the request takes 5 ms on average. What must the page-fault rate be to achieve an effective access time of 1 microsecond? Assume the system is running only a single process and the CPU is idle during page swaps.
3. A system using demand-paged memory takes 250 ns to satisfy a memory request if the page is in memory. If the page is not in memory, the request takes on average 5 ms if a free frame is available or the page to be swapped out has not been modified, and 12 ms if the page to be swapped out has been modified. What is the effective access time if the page-fault rate is 2% and 40% of the time the page to be replaced has been modified? Assume the system is running only a single process and the CPU is idle during page swaps.
4. On a system using a disk cache, the mean access time is 41.2 ms, the mean cache access time is 2 ms, the mean disk access time is 100 ms, and the system has 8 MB of cache memory. For each

doubling of the amount of cache memory, the miss rate is halved. How much memory must be added to reduce the mean access time to 20 ms? Assume the amount of memory may only increase by doubling.
5. On a system using paging and segmentation, the virtual address space consists of up to 8 segments, where each segment can be up to 2^29 bytes long. The hardware pages each segment into 256-byte pages. Determine the number of bits needed in the virtual address to specify the (i) segment number, (ii) page number, (iii) offset within a page, and (iv) entire virtual address.
6. How many page faults occur for the optimal page-replacement algorithm with the following reference string for four page frames: 1,2,3,4,5,3,4,1,6,7,8,7,8,9,7,8,9,5,4,5,4,2


More information

Chapter 9: Virtual-Memory

Chapter 9: Virtual-Memory Chapter 9: Virtual-Memory Management Chapter 9: Virtual-Memory Management Background Demand Paging Page Replacement Allocation of Frames Thrashing Other Considerations Silberschatz, Galvin and Gagne 2013

More information

Outlook. Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium

Outlook. Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium Main Memory Outlook Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium 2 Backgound Background So far we considered how to share

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2018 L20 Virtual Memory Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 Questions from last time Page

More information

Chapter 8 Main Memory

Chapter 8 Main Memory Chapter 8 Main Memory 8.1, 8.2, 8.3, 8.4, 8.5 Chapter 9 Virtual memory 9.1, 9.2, 9.3 https://www.akkadia.org/drepper/cpumemory.pdf Images from Silberschatz Pacific University 1 How does the OS manage memory?

More information

Chapter 9: Virtual Memory. Operating System Concepts 9 th Edition

Chapter 9: Virtual Memory. Operating System Concepts 9 th Edition Chapter 9: Virtual Memory Silberschatz, Galvin and Gagne 2013 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

Memory Management. Chapter 4 Memory Management. Multiprogramming with Fixed Partitions. Ideally programmers want memory that is.

Memory Management. Chapter 4 Memory Management. Multiprogramming with Fixed Partitions. Ideally programmers want memory that is. Chapter 4 Memory Management Ideally programmers want memory that is Memory Management large fast non volatile 4.1 Basic memory management 4.2 Swapping 4.3 Virtual memory 4.4 Page replacement algorithms

More information

Memory Management. Dr. Yingwu Zhu

Memory Management. Dr. Yingwu Zhu Memory Management Dr. Yingwu Zhu Big picture Main memory is a resource A process/thread is being executing, the instructions & data must be in memory Assumption: Main memory is super big to hold a program

More information

Operating Systems Memory Management. Mathieu Delalandre University of Tours, Tours city, France

Operating Systems Memory Management. Mathieu Delalandre University of Tours, Tours city, France Operating Systems Memory Management Mathieu Delalandre University of Tours, Tours city, France mathieu.delalandre@univ-tours.fr 1 Operating Systems Memory Management 1. Introduction 2. Contiguous memory

More information

Chapter 8: Virtual Memory. Operating System Concepts

Chapter 8: Virtual Memory. Operating System Concepts Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2009 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2018 L17 Main Memory Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Was Great Dijkstra a magician?

More information

Operating Systems. Paging... Memory Management 2 Overview. Lecture 6 Memory management 2. Paging (contd.)

Operating Systems. Paging... Memory Management 2 Overview. Lecture 6 Memory management 2. Paging (contd.) Operating Systems Lecture 6 Memory management 2 Memory Management 2 Overview Paging (contd.) Structure of page table Shared memory Segmentation Segmentation with paging Virtual memory Just to remind you...

More information

Operating System - Virtual Memory

Operating System - Virtual Memory Operating System - Virtual Memory Virtual memory is a technique that allows the execution of processes which are not completely available in memory. The main visible advantage of this scheme is that programs

More information

Operating Systems. 09. Memory Management Part 1. Paul Krzyzanowski. Rutgers University. Spring 2015

Operating Systems. 09. Memory Management Part 1. Paul Krzyzanowski. Rutgers University. Spring 2015 Operating Systems 09. Memory Management Part 1 Paul Krzyzanowski Rutgers University Spring 2015 March 9, 2015 2014-2015 Paul Krzyzanowski 1 CPU Access to Memory The CPU reads instructions and reads/write

More information

P r a t t hr h ee e : e M e M m e o m r o y y M a M n a a n g a e g m e e m n e t 8.1/72

P r a t t hr h ee e : e M e M m e o m r o y y M a M n a a n g a e g m e e m n e t 8.1/72 Part three: Memory Management programs, together with the data they access, must be in main memory (at least partially) during execution. the computer keeps several processes in memory. Many memory-management

More information

CS370 Operating Systems

CS370 Operating Systems CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2016 Lecture 32 Virtual Memory Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 Questions for you What is

More information

Part-A QUESTION BANK UNIT-III 1. Define Dynamic Loading. To obtain better memory-space utilization dynamic loading is used. With dynamic loading, a routine is not loaded until it is called. All routines

More information

CS3600 SYSTEMS AND NETWORKS

CS3600 SYSTEMS AND NETWORKS CS3600 SYSTEMS AND NETWORKS SPRING 2013 Lecture 13: Paging Prof. Alan Mislove (amislove@ccs.neu.edu) Paging Physical address space of a process can be noncontiguous; process is allocated physical memory

More information

Chapter 4 Memory Management. Memory Management

Chapter 4 Memory Management. Memory Management Chapter 4 Memory Management 4.1 Basic memory management 4.2 Swapping 4.3 Virtual memory 4.4 Page replacement algorithms 4.5 Modeling page replacement algorithms 4.6 Design issues for paging systems 4.7

More information

Memory Management Minsoo Ryu Real-Time Computing and Communications Lab. Hanyang University

Memory Management Minsoo Ryu Real-Time Computing and Communications Lab. Hanyang University Memory Management Minsoo Ryu Real-Time Computing and Communications Lab. Hanyang University msryu@hanyang.ac.kr Topics Covered Introduction Memory Allocation and Fragmentation Address Translation Paging

More information

Roadmap. Tevfik Koşar. CSC Operating Systems Spring Lecture - XII Main Memory - II. Louisiana State University

Roadmap. Tevfik Koşar. CSC Operating Systems Spring Lecture - XII Main Memory - II. Louisiana State University CSC 4103 - Operating Systems Spring 2007 Lecture - XII Main Memory - II Tevfik Koşar Louisiana State University March 8 th, 2007 1 Roadmap Dynamic Loading & Linking Contiguous Memory Allocation Fragmentation

More information

Chapter 4 Memory Management

Chapter 4 Memory Management Chapter 4 Memory Management 4.1 Basic memory management 4.2 Swapping 4.3 Virtual memory 4.4 Page replacement algorithms 4.5 Modeling page replacement algorithms 4.6 Design issues for paging systems 4.7

More information

MEMORY MANAGEMENT/1 CS 409, FALL 2013

MEMORY MANAGEMENT/1 CS 409, FALL 2013 MEMORY MANAGEMENT Requirements: Relocation (to different memory areas) Protection (run time, usually implemented together with relocation) Sharing (and also protection) Logical organization Physical organization

More information

Virtual Memory. Chapter 8

Virtual Memory. Chapter 8 Virtual Memory 1 Chapter 8 Characteristics of Paging and Segmentation Memory references are dynamically translated into physical addresses at run time E.g., process may be swapped in and out of main memory

More information

9.1 Background. In Chapter 6, we showed how the CPU can be shared by a set of processes. As a result of

9.1 Background. In Chapter 6, we showed how the CPU can be shared by a set of processes. As a result of Chapter 9 MEMORY MANAGEMENT In Chapter 6, we showed how the CPU can be shared by a set of processes. As a result of CPU scheduling, we can improve both the utilization of the CPU and the speed of the computer's

More information

CS450/550 Operating Systems

CS450/550 Operating Systems CS450/550 Operating Systems Lecture 4 memory Palden Lama Department of Computer Science CS450/550 Memory.1 Review: Summary of Chapter 3 Deadlocks and its modeling Deadlock detection Deadlock recovery Deadlock

More information

CS399 New Beginnings. Jonathan Walpole

CS399 New Beginnings. Jonathan Walpole CS399 New Beginnings Jonathan Walpole Memory Management Memory Management Memory a linear array of bytes - Holds O.S. and programs (processes) - Each cell (byte) is named by a unique memory address Recall,

More information