Chapter 8 Memory Management
- Joan Fletcher
The techniques we will describe are:
1. Single contiguous memory management
2. Partitioned memory management
3. Relocatable partitioned memory management
4. Paged memory management
5. Demand-paged memory management
6. Segmented memory management
7. Segmented and demand-paged memory management
8. Other memory management schemes (swapping, overlays)

Single Contiguous Memory Management

Memory holds the O.S. and a single job; whatever the job does not use is wasted.

Hardware Support: no special hardware is required.

Advantages:
- small O.S.
- easy to understand and use

Disadvantages: poor memory utilization:
- some memory is not used at all (wasted area)
- no multiprogramming (poor CPU utilization)
- the entire program must be loaded even if some portions, such as error routines, are never accessed
- a job that needs more memory than is available cannot be run
Partitioned Memory Management

Memory is divided among the O.S., Job 1 (partition 1), Job 2 (partition 2), Job 3 (partition 3), and unassigned space.

Hardware Support: little special hardware is needed to prevent one job from disrupting either the O.S. or other jobs. Two bounds registers may be used to bracket the partition being used. If a job tries to access memory outside its partition, a protection interrupt occurs.

Software Algorithm:
1. Static Partitioning: memory is divided into partitions prior to the processing of any jobs.

Example.

partition number | size | location | status
1 |   8K | 312K | in use
2 |  32K | 320K | in use
3 |  32K | 352K | not in use
4 | 120K | 384K | not in use
5 | 520K | 504K | in use
Memory map: O.S. at 0-312K; Job 1 (1K) in partition 1 at 312K; Job 2 (9K) in partition 2 at 320K; partitions 3 (352K) and 4 (384K) are separate free areas; Job 3 (121K) in partition 5 at 504K. The unused remainder of each in-use partition is wasted space.

partition number | size | job size | wasted space
1     |   8K |   1K |   7K
2     |  32K |   9K |  23K
3     |  32K |   -  |  32K
4     | 120K |   -  | 120K
5     | 520K | 121K | 399K
total | 712K | 131K | 581K

Only 131K of the available 712K is actually used. Thus, over 81 percent of the available memory is wasted.

2. Dynamic Partitioning: partitions are created during job processing so as to match partition sizes to job sizes.

Example.

allocated partition status table
partition number | size | location | status
1 |   8K | 312K | allocated
2 |  32K | 320K | allocated
4 | 120K | 384K | allocated
(empty entry)
(empty entry)
unallocated area status table
free area | size | location | status
1 |  32K | 352K | available
2 | 520K | 504K | available
(empty entry)
(empty entry)

Jobs to be allocated: Job 4 (24K), Job 5 (128K), Job 6 (256K).

a) initial state: O.S. at 0-312K; Job 1 (8K) at 312K; Job 2 (32K) at 320K; free (32K) at 352K; Job 3 (120K) at 384K; free (520K) at 504K.
b) jobs 4, 5, and 6 allocated: Job 4 (24K) at 352K, leaving free (8K) at 376K; Job 5 (128K) at 504K; Job 6 (256K) at 632K; free (136K) at 888K.
c) jobs 2 and 3 terminated: free (32K) at 320K; Job 4 (24K) at 352K; free (128K) at 376K (the 8K hole merged with Job 3's freed 120K partition); Job 5 (128K) at 504K; Job 6 (256K) at 632K; free (136K) at 888K-1024K.
Various algorithms are available to accomplish the allocation and deallocation functions.

1. First Fit Partition Algorithm
The free table is kept sorted by location. When it is necessary to allocate a partition, we start at the free area at the lowest memory address and keep looking until we find the first free area big enough for the partition to fit. There are two major advantages:
- the question "Is this partition adjacent to any free area?" in the deallocation algorithm can usually be answered by searching only half the table.
- using free areas at low memory addresses tends to allow a large free area to form at the high memory addresses, so there is a good chance of finding a large enough free area.

2. Best Fit Partition Algorithm
The free table is kept sorted by size (small to large). The first free area large enough for the desired partition is the best fit. There are three major advantages:
- a free area can be found by searching only half the table on average.
- if there is a free area of exactly the desired size, it will be selected. This is not necessarily true for first fit.
- if there is no free area of exactly the desired size, the partition is carved out of the smallest possible free area and does not destroy a large free area.
However, there is a major disadvantage: the selected free area is usually not exactly the right size and must be split into two pieces, and the leftover piece is often quite small, almost worthless.

3. Worst Fit Partition Algorithm
The free table is kept sorted by size from large to small. Allocate from the largest hole first, so the leftover piece stays usable.

Conclusion of Partitioned Memory Management

Advantages:
1. supports multiprogramming, giving better memory, I/O, and CPU utilization
2. requires no special costly hardware
3. simple and easy to implement

Disadvantages:
1. fragmentation can be a significant problem. Fragmentation is the development of a large number of separate free areas (i.e., the total free memory is fragmented into small pieces).
2.
even if memory is not fragmented, the single largest free area may not be large enough for a partition.
3. it requires more memory than single contiguous memory management (as well as a more complex O.S.) in order to hold more than one job.
4. memory may contain information that is never used, such as error routines.
5. a partition's size is limited to the size of physical memory.
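The first-fit and best-fit policies above can be sketched as follows. This is a minimal illustration with our own names, assuming each free area is a (location, size) pair as in the tables above:

```python
def first_fit(free_areas, request):
    # free_areas is kept sorted by location
    for i, (loc, size) in enumerate(free_areas):
        if size >= request:
            # carve the partition out of the low end of the hole
            if size == request:
                del free_areas[i]
            else:
                free_areas[i] = (loc + request, size - request)
            return loc
    return None  # no hole is big enough

def best_fit(free_areas, request):
    # scan for the smallest hole that still fits
    candidates = [(size, loc, i) for i, (loc, size) in enumerate(free_areas)
                  if size >= request]
    if not candidates:
        return None
    size, loc, i = min(candidates)
    if size == request:
        del free_areas[i]
    else:
        free_areas[i] = (loc + request, size - request)
    return loc
```

With the free table from the example above (32K at 352K, 520K at 504K), a 24K request under first fit is carved from the 32K hole at 352K, leaving an 8K remainder — exactly the Job 4 allocation shown earlier.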
Relocatable Partitioned Memory Management

An obvious solution to the fragmentation problem is to periodically combine all free areas into one contiguous area. This can be done by moving the contents of all allocated partitions so that they become contiguous. This process is called compaction (or recompaction, since it is done many times).

Example. Job 7 (256K) is to be allocated.

a) initial state: O.S. at 0-312K; Job 1 (8K) at 312K; free (32K) at 320K; Job 4 (24K) at 352K; free (128K) at 376K; Job 5 (128K) at 504K; Job 6 (256K) at 632K; free (136K) at 888K.
b) after compaction: Job 1 (8K) at 312K; Job 4 (24K) at 320K; Job 5 (128K) at 344K; Job 6 (256K) at 472K; free (296K) at 728K.
c) after allocating a partition for Job 7: Job 7 (256K) at 728K; free (40K) at 984K-1024K.

Although it is conceptually simple, moving a job's partition doesn't guarantee that the job will still run correctly at its new location. Several hardware techniques have been developed to cope with this relocation problem. Most of them are based on the concept of a mapped memory; that is, the address space seen by a job is not necessarily the same as the physical memory address used.

Hardware Support: use base-bounds relocation registers (a base relocation register and a bounds register) to do dynamic relocation (relocation at execution time).
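The base-bounds relocation just described can be sketched as follows. This is our own minimal illustration, assuming the job's address space starts at 0, the base register holds the partition's physical start, and the bounds register holds its size:

```python
def relocate(logical_addr, base, bounds):
    # protection check: any address outside [0, bounds) traps
    if logical_addr < 0 or logical_addr >= bounds:
        raise MemoryError("protection interrupt: address out of bounds")
    # dynamic relocation: add the base register at execution time
    return base + logical_addr
```

The same program runs unchanged after compaction moves its partition: only the value loaded into the base register changes, not any address inside the job.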
Example.

a) before relocation: Job 4's address space (352K-376K) occupies physical memory at the same addresses, 352K-376K; the relocation register holds 0, so effective addresses equal physical addresses.
b) after relocation: the partition has been moved to physical 320K-344K; the relocation register is adjusted so that each effective address maps to the new location while the instructions themselves (e.g., the load shown) are unchanged.

Since the starting point of a job's address space is unrelated to the physical location of the partition, it is useful to start the address space at location 0. If this convention is used, all relocation
register values will be positive. Furthermore, protection is accomplished, since the hardware will detect any effective address that is negative or exceeds the value of the bounds register.

Example. Job 4's address space starts at 0 and is 24K long (bounds register = 24K); its partition occupies physical 352K-376K (relocation register = 352K). The instruction "load 1,9800" generates effective address 9800, which the relocation register maps to physical address 352K + 9800.

Advantages:
- eliminates fragmentation
- increases the degree of multiprogramming, increasing memory and CPU utilization

Disadvantages:
- relocation hardware increases the cost of the computer and may slow down its speed
- compaction time may be substantial
- some memory will still be unused, because even though memory is compacted, the amount of free area may be less than the needed partition size
- memory may contain information that is never used, such as error routines
- a job's partition size is limited to the size of physical memory

Paged Memory Management

Another possible solution to the external fragmentation problem is to permit the logical address space of a process to be noncontiguous, thus allowing a process to be allocated physical memory wherever the latter is available. Each job's address space is divided into equal pieces, called pages, and, likewise, physical memory is divided into pieces of the same size, called blocks or frames. Then, by providing a suitable hardware mapping facility, any page can be placed into any block. The pages remain
logically contiguous (i.e., to the user program) but the corresponding blocks are not necessarily contiguous.

Hardware Support: For the hardware to perform the mapping from address space to physical memory, there must be a separate register for each page; these registers are often called page maps or Page Map Tables (PMTs). They may be either special hardware registers or a reserved section of memory.

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into the page table. The page table contains the block number, which is the base address of that page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.

Example. The CPU generates logical address (p, d); the Page Map Table maps page p to frame f; physical address (f, d) is sent to physical memory.

The size of a page is typically a power of 2. The selection of a power of 2 as the page size makes the translation of a logical address into a page number and page offset particularly easy. If the size of the logical address space is 2^m, and the page size is 2^n addressing units (bytes or words), then the high-order m - n bits of a logical address designate the page number, and the n low-order bits designate the page offset.

page number | page offset
     p      |      d
 m - n bits |   n bits

Example.
(The figure here, garbled in transcription, showed jobs 1, 2, and 3 with their pages scattered among noncontiguous physical memory blocks, together with a Page Map Table for each job; an instruction such as "load 1,2108" is translated through the running job's PMT to the physical location of the referenced page.)
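The translation just described can be sketched as follows. This is our own illustrative code, assuming a page size of 2^n addressing units and a per-job PMT mapping page numbers to block (frame) numbers:

```python
PAGE_SIZE = 1024  # 2**10 units per page (illustrative value)

def page_translate(logical_addr, pmt):
    p, d = divmod(logical_addr, PAGE_SIZE)   # page number, page offset
    if p >= len(pmt) or pmt[p] is None:
        raise MemoryError("page not mapped")
    f = pmt[p]                               # block number from the PMT
    return f * PAGE_SIZE + d                 # physical address (f, d)
```

For instance, with pmt = [5, 6] (page 0 in block 5, page 1 in block 6), logical address 1034 splits into page 1, offset 10, and maps to block 6 at physical address 6*1024 + 10.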
The hardware implementation of the page table can be done in a number of different ways:

- As a set of dedicated registers, if the page table is reasonably small (for example, 256 entries). Since only one job is running at a time, only one set of hardware page mapping registers is needed. These registers must be saved and reset whenever the processor is switched to a new job; the CPU dispatcher reloads them just as it reloads the other registers.

- Kept in main memory, with a Page Map Table Address Register (PMTAR) pointing to the page table. Changing page tables then requires changing only this one register.

Example.

Job Table
job number | size | location of PMT | status
1 |  8K | 3600 | allocated
2 | 12K | 4160 | allocated
3 |  4K | 3820 | allocated
(empty entry)

Memory Block Table
block | status
0 | os
1 | os
2 | job 2
3 | available
4 | job 2
5 | job 1
6 | job 1
7 | job 2
8 | job 3
9 | available

Each job's PMT maps its page numbers to the block numbers recorded for it in the Memory Block Table.
(The figure here, garbled in transcription, showed the PMTAR holding the length and address of the running job's PMT. The instruction "load 1,8300" in job 2's 12K address space is translated through job 2's PMT — page 0 in block 2, page 1 in block 4, page 2 in block 7 — to its physical location in the 40K physical memory.)

- A hybrid page map table is the standard solution. It uses a special, small, fast-lookup hardware cache, called associative registers or a Translation Look-aside Buffer (TLB). Each associative register consists of two parts: a key and a value. When the associative registers are presented with an item, it is compared with all keys simultaneously. If the item is found, the corresponding value field is output. The search is fast; however, the
hardware is expensive, so a TLB typically holds only a small number of entries — only a few of the page-table entries. When a logical address is generated by the CPU, its page number is presented to the TLB, which holds page numbers and their corresponding block numbers. If the page number is found in the TLB, its frame number is immediately available and is used to access memory. If the page number is not in the TLB, a memory reference to the page table must be made. When the block number is obtained, we can use it to access memory. In addition, we add the page number and frame number to the TLB, so that they will be found quickly on the next reference. If the TLB is already full of entries, the operating system must select one for replacement.

(The figure here showed the TLB in the translation path: on a TLB hit, the frame number f comes directly from the TLB; on a miss, it comes from the PMT in memory.)

Note! Every time a new page table is selected (for instance, on each context switch), the TLB must be flushed (erased) to ensure that the next executing process doesn't use the wrong information.

Advantages:
- eliminates external fragmentation
- increases memory and CPU utilization
- the compaction overhead required by the relocatable partition scheme is also eliminated

Disadvantages:
- page address mapping hardware increases the cost of the computer and slows down the processor
- processor overhead time must be expended to maintain and update the PMT and various tables
- internal fragmentation is still a problem
- some memory will still be unused if the number of available blocks is not sufficient for the address spaces of the jobs to be run
- memory contains information that is seldom used, such as error routines
- a job's address space is limited to the size of physical memory

Demand-Paged Memory Management

In all the previous schemes a job could not be run until there was sufficient available memory to load its entire address space. These problems could be resolved by using extremely large main memories. At present this approach, although simple, is not usually economically feasible. Another approach is to use the operating system to produce the illusion of an extremely large memory. Since this large memory is merely an illusion, it is called virtual memory.

Example. (The figure here, garbled in transcription, showed each job's PMT extended with a status bit: pages marked "y" occupy a physical memory block, while pages marked "n" reside only on secondary storage; e.g., job 4 has page 0 in memory and its remaining pages on disk.)
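The TLB lookup path described in the paging section can be simulated as follows. This is a sketch with our own names; a simple FIFO eviction stands in for whatever replacement choice the hardware or O.S. actually makes:

```python
from collections import OrderedDict

TLB_SIZE = 4                      # illustrative; real TLBs hold more entries
tlb = OrderedDict()               # page number -> block number

def lookup(p, pmt):
    """Return (block, hit) for page p, consulting the TLB first."""
    if p in tlb:
        return tlb[p], True       # TLB hit: no page-table reference needed
    f = pmt[p]                    # TLB miss: walk the page table in memory
    if len(tlb) >= TLB_SIZE:
        tlb.popitem(last=False)   # evict the oldest entry (FIFO stand-in)
    tlb[p] = f                    # cache the mapping for the next reference
    return f, False

def flush():
    tlb.clear()                   # required on every context switch
```

The flush on context switch mirrors the note above: without it, the next process would translate through stale (page, block) pairs belonging to the previous process.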
If the address mapping hardware encounters a page table entry with status = N, it generates a page interrupt (or page fault). The operating system must process this interrupt by loading the required page and adjusting the page table entries correspondingly. Once memory has become filled with pages, it is possible to load another page only by first removing one of the pages presently in memory. This requires a technique called variously page swapping, page removal, page replacement, page turning, or page cannibalizing. The replaced page is copied back onto the secondary storage device before the new page is loaded (i.e., the two pages swap places between memory and secondary storage). Removing a page from memory and then immediately needing it again due to a page fault referencing that page would be very unfortunate. The phenomenon of excessively moving pages back and forth between memory and secondary storage is called thrashing, since it consumes a lot of the computer's energy but accomplishes very little useful work.

Hardware Support: the address mapping hardware via the Page Map Table, as described under Paged Memory Management. Three key additions to the hardware are required:
1. A status bit in the PMT to indicate whether the page is in main memory or on secondary storage.
2. Interrupt action to transfer control to the operating system if the job attempts to access a page not in main memory.
3. A record of individual page usage to assist the operating system in determining which page to remove, if necessary.

Software Algorithm: Demand-Paged Memory Management must interact with information (file) management to access and store copies of the job's address space on secondary storage. The relationship of the File Map Table to the Page Map Table and the Memory Block Table is illustrated below:
(The figure here, garbled in transcription, showed the Page Map Table Address Register pointing at the running job's PMT; the Memory Block Table recording which (job, page) pair occupies each block; and the File Map Table giving, for each page of the job, its address on the secondary storage device, so that a faulting page can be located on disk and read into an available block.)
Page fault handling (hardware and software):

1. (Hardware) Start processing an instruction; generate a data address and compute the page number.
2. (Hardware) Is that page in memory? If yes, fetch the data, complete the instruction, and advance to the next instruction.
3. (Hardware) If no, a page interrupt transfers control to the operating system.
4. (Software) Is there a free block? If not, select a page to remove; if that page was changed, write it back onto the disk; adjust the block/page tables.
5. (Software) Get the page number needed and its disk address from the file map; read the page in; adjust the block and page tables.
6. (Software) Restart the interrupted instruction.
Multilevel Page Map Tables

The secret of the multilevel page map table method is to avoid keeping all the page tables in memory all the time; in particular, those that are not currently needed should not be kept around. A virtual address is split into three fields — PMT1, PMT2, and offset. PMT1 indexes the top-level page map table, whose entry points to a second-level page map table; PMT2 indexes that second-level table to obtain the frame number.

As an example, consider a 32-bit virtual address whose fields are PMT1 = 1, PMT2 = 3, and offset = 4. The memory map first uses PMT1 to index into the top-level page table and obtain entry 1. It then uses PMT2 to index into the second-level page table just found and extract entry 3. This entry contains the frame number.

Inverted Page Map Tables

Traditional page map tables of the type described so far require one entry per virtual page, since they are indexed by virtual page number. If the address space consists of 2^32 bytes, with 4096 bytes per page, then over 1 million page table entries are needed. As a bare minimum, the page map table will have to be at least 4 megabytes. On larger systems, this size is probably doable. However, as 64-bit computers become more common, the situation changes drastically. If the address space is now 2^64 bytes, with a 4K page size, the page map table needs over 10^15 bytes. Tying up 1 million gigabytes just for the page map table is not doable, not now and not for decades to come, if ever.

One solution is the inverted page map table. In this design, there is one entry per frame of physical memory, rather than one entry per page of virtual address space. For example, with 64-bit virtual addresses, a 4K page size, and 32 MB of RAM, an inverted page map table requires only 8192 entries. Each entry keeps track of which (process, page) pair is located in which frame. However, they have
a serious downside: memory mapping becomes much harder. When process n references page p, the hardware can no longer find the frame number by using p as an index into the page map table. Instead, it must search the entire inverted page map table for an entry (n, p). Furthermore, this search must be done on every memory reference, not just on page faults. The way out of this dilemma is to use the TLB. If the TLB can hold all of the heavily used pages, memory mapping can happen just as fast as with regular page map tables.

Page-Replacement Algorithms

How do we select a particular replacement algorithm? In general, we want the one with the lowest page-fault rate. Let
F = failure function (number of page faults)
S = success function (number of references that do not fault)
If P is the number of page references in the page trace, then S + F = P.

Performance:
success frequency function s = S / P
failure frequency function f = F / P
Note! f = 1 - s

1. First In First Out (FIFO)
Removes the page that has been in memory for the longest time.

Example. On a page trace of 12 references: F = 9, S = 3, so f = 9/12 * 100 = 75%. In practical systems with many more memory blocks and much longer page traces, f is usually below 1%.

FIFO Anomaly: under certain circumstances adding more physical memory can result in poorer performance (Belady's anomaly).

Example.
With one more block, the same page trace gives F = 10, S = 2, so f = 10/12 * 100 = 83%.

2. Optimal Algorithm
Replace the page that will not be used for the longest period of time. It guarantees the lowest possible page-fault rate for a fixed number of blocks. Unfortunately, the algorithm is difficult to implement, because it requires future knowledge of the reference string. As a result, it is used mainly for comparison studies.

Example. On the same page trace, F = 7, S = 5, so f = 7/12 * 100 = 58.33%.

3. Least Recently Used (LRU)
If the optimal algorithm is not feasible, perhaps an approximation to it is possible. If we use the recent past as an approximation of the near future, then we will replace the page that has not been used for the longest period of time. This approach is the least recently used algorithm. This strategy is the optimal algorithm looking backward in time, rather than forward.

Example. On the same page trace, F = 10, S = 2, so f = 10/12 * 100 = 83.33%.
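Fault counts like those above are easy to reproduce with a small simulator. The sketch below (our own code) counts faults for FIFO and LRU over a reference string:

```python
from collections import deque

def fifo_faults(trace, num_blocks):
    """Count page faults under FIFO replacement."""
    frames = deque()               # oldest resident page at the left
    faults = 0
    for page in trace:
        if page not in frames:
            faults += 1
            if len(frames) >= num_blocks:
                frames.popleft()   # evict the page resident longest
            frames.append(page)
    return faults

def lru_faults(trace, num_blocks):
    """Count page faults under LRU replacement."""
    frames = []                    # least recently used page at the front
    faults = 0
    for page in trace:
        if page in frames:
            frames.remove(page)    # refresh: move to most-recent position
        else:
            faults += 1
            if len(frames) >= num_blocks:
                frames.pop(0)      # evict the least recently used page
        frames.append(page)
    return faults
```

On the classic Belady reference string 1,2,3,4,1,2,5,1,2,3,4,5 (12 references), fifo_faults gives 9 faults with 3 blocks but 10 faults with 4 blocks — adding memory makes FIFO worse, illustrating the anomaly — while lru_faults gives 10 faults with 3 blocks, consistent with the counts quoted above.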
Neither the optimal algorithm nor LRU suffers from Belady's anomaly. The major problem is implementation. Two implementations are feasible:

- Counters: We associate with each page-table entry a time-of-use field, and add to the CPU a logical clock or counter. The clock is incremented on every memory reference. Whenever a reference to a page is made, the contents of the clock register are copied to the time-of-use field in the page table for that page. We replace the page with the smallest time-of-use value. Overflow of the clock is possible.

- Stack: Whenever a page is referenced, it is removed from the stack and put on the top. In this way, the top of the stack is always the most recently used page and the bottom is the LRU page. Because entries must be removed from the middle of the stack, it is best implemented as a doubly linked list with head and tail pointers.

Note that neither implementation would be conceivable without hardware assistance. The updating of the clock fields or stack must be done for every memory reference. If we were to use an interrupt on every reference, to allow software to update such data structures, it would slow every memory reference by a factor of at least 10, hence slowing every user process by a factor of 10. Few systems could tolerate that level of overhead for memory management.

4. LRU Approximation
Few computer systems provide sufficient hardware support for true LRU. Some systems provide no hardware support, and other algorithms such as FIFO must be used. Many systems provide some help, however, in the form of a reference bit. The reference bit for a page is set whenever that page is referenced. We do not know the order of use, but we know which pages were used and which were not. This partial ordering information leads to many page-replacement algorithms that approximate LRU. We will discuss only the Second-Chance Algorithm.

The basic algorithm of second-chance replacement is FIFO.
When a page has been selected, however, we inspect its reference bit. If the value is 0, we proceed to replace the page. If the reference bit is 1, we give that page a second chance and move on to select the next FIFO page. When a page gets a second chance, its reference bit is cleared. Thus, if a page is used often enough to keep its reference bit set, it will never be replaced.

One way to implement the second-chance algorithm is as a circular queue. A pointer indicates which page is to be replaced next. When a block is needed, the pointer advances until it finds a page with a 0 reference bit; as it advances, it clears the reference bits. Once a victim page is found, the page is replaced and the new page is inserted in the circular queue in that position. Notice that, in the worst case, when all bits are set, the pointer cycles through the whole queue, giving each page a second chance and clearing all the reference bits before selecting the next page for replacement. Second-chance replacement degenerates to FIFO if all bits are set.
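The circular-queue ("clock") form of second chance can be sketched as follows. This is our own illustrative code: frames is a list of [page, reference_bit] slots and hand is the pointer's index:

```python
def select_victim(frames, hand):
    """Advance the clock hand to a page with reference bit 0; return
    (victim slot index, new hand position)."""
    while True:
        page, ref = frames[hand]
        if ref == 0:
            return hand, (hand + 1) % len(frames)   # replace this page
        frames[hand][1] = 0          # second chance: clear the bit
        hand = (hand + 1) % len(frames)
```

For example, with frames = [[1,1],[2,0],[3,1]] and hand = 0, page 1's set bit earns it a second chance (the bit is cleared as the hand passes), and page 2 — the first page with a 0 bit — is selected as the victim.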
(The figure here showed the circular queue of pages with their reference bits, before and after the clock hand advances.)

The Working Set Model

The set of pages that a process is currently using is called its working set. If the entire working set is in memory, the process will run without causing many faults until it moves into another execution phase (e.g., the next pass of the compiler). If the available memory is too small to hold the entire working set, the process will cause many page faults and run slowly, since executing an instruction takes only a few nanoseconds while reading in a page from the disk typically takes tens of milliseconds. A program causing page faults every few instructions is said to be thrashing.

Efficient operation of a virtual memory system depends on the degree of locality of reference in programs. Locality can be divided into two classes: temporal locality and spatial locality. Temporal locality refers to an observed property of most programs: once a data or instruction location is referenced, it is often referenced again very soon. This behavior can be explained by program constructs such as loops, frequently used variables, and subroutines. Spatial locality refers to the probability that once a location is referenced, a nearby location will be referenced soon. This behavior can be explained by program constructs such as sequencing, linear data structures (e.g., arrays), and the tendency of programmers to put commonly used variables near one another.

Advantages:
- large virtual memory: a job's address space is no longer constrained by the size of physical memory
- more efficient use of memory
- unconstrained multiprogramming

Disadvantages:
- the number of tables and the amount of processor overhead for handling page interrupts are greater than in the case of the simple paged management technique
- due to the lack of an explicit constraint on a job's address space size or the amount of multiprogramming, it is necessary to develop approaches to prevent thrashing

SEGMENTED MEMORY MANAGEMENT

A segment can be defined as a logical grouping of information, such as a subroutine, array, or data area. The major difference between a page and a segment is that a segment is a logical unit of information, visible to the user's program and of arbitrary size, whereas a page is a physical unit of information, invisible to the user's program and of fixed size.

Example. (The figure here, garbled in transcription, showed a job's segmented address space — segments such as [MAIN], [X], [A], and [B], each starting at its own location 0 — and a Segment Map Table with one entry per segment:

segment | size | access | status | location

Access field: E = execute allowed, R = read allowed, W = write allowed.
Status field: Y = in memory, N = not in memory.

A reference such as CALL [X]<Y> or C = A(6) is mapped through the Segment Map Table to the segment's location in physical memory; a segment marked N is not in memory.)
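The segment-map lookup in the example above can be sketched as follows. This is our own illustration with made-up table values; the size check, access check, and status check mirror the three fields of the Segment Map Table:

```python
smt = {
    # segment: (size, access, status, location) -- illustrative values only
    'MAIN': (160, 'E',  'Y', 4000),
    'A':    (100, 'RW', 'Y', 3800),
    'B':    (60,  'RW', 'N', None),
}

def seg_translate(segment, byte, mode):
    size, access, status, location = smt[segment]
    if byte >= size:
        raise MemoryError("byte number outside the segment's range")
    if mode not in access:
        raise PermissionError("access violation")
    if status != 'Y':
        raise LookupError("segment fault: segment not in memory")
    return location + byte
```

A reference like C = A(6) would call seg_translate('A', 6, 'R'), while an attempt to execute a data segment, or to touch a segment marked N, traps to the operating system.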
Segmented memory management can offer several advantages:
1. Eliminates fragmentation, by moving segments around.
2. Provides virtual memory: by keeping only the actively used segments in main memory, the job's total address space may exceed the physical memory size.
3. Allows dynamically growing segments and automatic bounds checking. If a segment's size must be increased during execution, out-of-range references can be detected via the size component of each entry in the Segment Map Table.
4. Dynamic linking and loading: by deferring linking until a segment is explicitly referenced, unnecessary linking is avoided.
5. Facilitates shared segments (data areas and procedures). If two jobs are using a square root routine, it is wasteful to have two separate copies in main memory.
6. Enforces controlled access: access to each segment can be controlled (see the access field).

Hardware Support: One approach is to have two separate address fields in each instruction:

opcode | index register | segment number | byte number

This allows easy formulation of instructions, such as

LOAD 1,[5]<8>

where 5 is the segment number and 8 is the byte within the segment. Another approach is to generate the effective address using linear address formation techniques and to interpret certain bits of the resulting effective address as a segment number and the remaining bits as the byte number. For example, the IBM System/370 can interpret a 24-bit effective address as an 8-bit segment number and a 16-bit byte number:

segment number (8 bits) | byte number (16 bits)

The segment address mapping hardware required is similar to the page mapping hardware. However, several differences arise from the differences between segments and pages. Because
segments are of arbitrary size, it is necessary to check that the referenced byte address is within the segment's range. Since segments may be removed from main memory and placed on secondary storage, it is necessary to indicate whether a segment is currently in memory; if it is not, an interrupt to the operating system must occur.

Advantages: as mentioned above.

Disadvantages:
- Considerable compaction overhead is incurred in order to support dynamic segment growth and eliminate fragmentation.
- It is difficult to manage variable-size segments on secondary storage.
- The maximum size of a segment is limited by the size of main memory.
- It is necessary to develop techniques or constraints to prevent segment thrashing.

Segmented and Demand-Paged Memory Management

One way to gain the logical benefits of segmentation and remove many of its disadvantages is to combine the segmentation and paging mechanisms. For more information, see the textbook.

Other Memory Management Schemes

Swapping
The early M.I.T. Compatible Time-Sharing System, CTSS, used a basic single contiguous allocation memory management scheme: only one job was in memory at a time. After running for a short period, the current job's address space was swapped onto secondary storage (roll-out) to allow another job to run in main memory (roll-in). In a similar manner, the partitions of a partitioned or relocatable partitioned memory management system can be swapped.

Overlays
A more refined form of the above swapping technique, which swaps only portions of the job's address space, is called overlay management. Overlays, normally used in conjunction with single contiguous, partitioned, or relocatable partitioned memory management, provide essentially an approximation to segmentation, but without the segment address mapping hardware.

Example.
An overlay tree: A (20K) at the root, with B (50K) and C (30K) beneath it; D (20K) and E (40K) beneath B; F (30K) beneath C. The total address space is 190K, but since B and C are never needed at the same time (and likewise the modules beneath them), they can occupy the same physical memory. The overlay assignment shown fits the whole program into 100K of physical memory, with A resident at 0-20K and the remaining modules overlaid above it.
Chapter 9 Memory Management Main Memory Operating system concepts. Sixth Edition. Silberschatz, Galvin, and Gagne 8.1 Chapter 9: Memory Management Background Swapping Contiguous Memory Allocation Segmentation
More informationCHAPTER 3 RESOURCE MANAGEMENT
CHAPTER 3 RESOURCE MANAGEMENT SUBTOPIC Understand Memory Management Understand Processor Management INTRODUCTION Memory management is the act of managing computer memory. This involves providing ways to
More informationBasic Memory Management. Basic Memory Management. Address Binding. Running a user program. Operating Systems 10/14/2018 CSC 256/456 1
Basic Memory Management Program must be brought into memory and placed within a process for it to be run Basic Memory Management CS 256/456 Dept. of Computer Science, University of Rochester Mono-programming
More informationa process may be swapped in and out of main memory such that it occupies different regions
Virtual Memory Characteristics of Paging and Segmentation A process may be broken up into pieces (pages or segments) that do not need to be located contiguously in main memory Memory references are dynamically
More informationChapter 8: Memory- Management Strategies. Operating System Concepts 9 th Edition
Chapter 8: Memory- Management Strategies Operating System Concepts 9 th Edition Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Strategies Background Swapping Contiguous Memory Allocation
More informationVirtual or Logical. Logical Addr. MMU (Memory Mgt. Unit) Physical. Addr. 1. (50 ns access)
Virtual Memory - programmer views memory as large address space without concerns about the amount of physical memory or memory management. (What do the terms 3-bit (or 6-bit) operating system or overlays
More informationOperating System Concepts
Chapter 9: Virtual-Memory Management 9.1 Silberschatz, Galvin and Gagne 2005 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped
More informationMemory Management. Reading: Silberschatz chapter 9 Reading: Stallings. chapter 7 EEL 358
Memory Management Reading: Silberschatz chapter 9 Reading: Stallings chapter 7 1 Outline Background Issues in Memory Management Logical Vs Physical address, MMU Dynamic Loading Memory Partitioning Placement
More informationCS399 New Beginnings. Jonathan Walpole
CS399 New Beginnings Jonathan Walpole Memory Management Memory Management Memory a linear array of bytes - Holds O.S. and programs (processes) - Each cell (byte) is named by a unique memory address Recall,
More informationPerformance of Various Levels of Storage. Movement between levels of storage hierarchy can be explicit or implicit
Memory Management All data in memory before and after processing All instructions in memory in order to execute Memory management determines what is to be in memory Memory management activities Keeping
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2018 L20 Virtual Memory Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 Questions from last time Page
More informationMemory and multiprogramming
Memory and multiprogramming COMP342 27 Week 5 Dr Len Hamey Reading TW: Tanenbaum and Woodhull, Operating Systems, Third Edition, chapter 4. References (computer architecture): HP: Hennessy and Patterson
More informationOperating Systems. Overview Virtual memory part 2. Page replacement algorithms. Lecture 7 Memory management 3: Virtual memory
Operating Systems Lecture 7 Memory management : Virtual memory Overview Virtual memory part Page replacement algorithms Frame allocation Thrashing Other considerations Memory over-allocation Efficient
More informationCS450/550 Operating Systems
CS450/550 Operating Systems Lecture 4 memory Palden Lama Department of Computer Science CS450/550 Memory.1 Review: Summary of Chapter 3 Deadlocks and its modeling Deadlock detection Deadlock recovery Deadlock
More informationOperating Systems Unit 6. Memory Management
Unit 6 Memory Management Structure 6.1 Introduction Objectives 6.2 Logical versus Physical Address Space 6.3 Swapping 6.4 Contiguous Allocation Single partition Allocation Multiple Partition Allocation
More informationBasic Memory Management
Basic Memory Management CS 256/456 Dept. of Computer Science, University of Rochester 10/15/14 CSC 2/456 1 Basic Memory Management Program must be brought into memory and placed within a process for it
More information6 - Main Memory EECE 315 (101) ECE UBC 2013 W2
6 - Main Memory EECE 315 (101) ECE UBC 2013 W2 Acknowledgement: This set of slides is partly based on the PPTs provided by the Wiley s companion website (including textbook images, when not explicitly
More informationChapter 4: Memory Management. Part 1: Mechanisms for Managing Memory
Chapter 4: Memory Management Part 1: Mechanisms for Managing Memory Memory management Basic memory management Swapping Virtual memory Page replacement algorithms Modeling page replacement algorithms Design
More informationMemory management. Last modified: Adaptation of Silberschatz, Galvin, Gagne slides for the textbook Applied Operating Systems Concepts
Memory management Last modified: 26.04.2016 1 Contents Background Logical and physical address spaces; address binding Overlaying, swapping Contiguous Memory Allocation Segmentation Paging Structure of
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Spring 2018 L17 Main Memory Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Was Great Dijkstra a magician?
More informationMain Memory (Part I)
Main Memory (Part I) Amir H. Payberah amir@sics.se Amirkabir University of Technology (Tehran Polytechnic) Amir H. Payberah (Tehran Polytechnic) Main Memory 1393/8/5 1 / 47 Motivation and Background Amir
More informationVirtual Memory. Chapter 8
Virtual Memory 1 Chapter 8 Characteristics of Paging and Segmentation Memory references are dynamically translated into physical addresses at run time E.g., process may be swapped in and out of main memory
More informationOperating Systems Lecture 6: Memory Management II
CSCI-GA.2250-001 Operating Systems Lecture 6: Memory Management II Hubertus Franke frankeh@cims.nyu.edu What is the problem? Not enough memory Have enough memory is not possible with current technology
More informationPreview. Memory Management
Preview Memory Management With Mono-Process With Multi-Processes Multi-process with Fixed Partitions Modeling Multiprogramming Swapping Memory Management with Bitmaps Memory Management with Free-List Virtual
More informationMemory Management. Chapter 4 Memory Management. Multiprogramming with Fixed Partitions. Ideally programmers want memory that is.
Chapter 4 Memory Management Ideally programmers want memory that is Memory Management large fast non volatile 4.1 Basic memory management 4.2 Swapping 4.3 Virtual memory 4.4 Page replacement algorithms
More informationChapter 8: Virtual Memory. Operating System Concepts Essentials 2 nd Edition
Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2013 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating
More informationChapter 9: Virtual Memory
Chapter 9: Virtual Memory Silberschatz, Galvin and Gagne 2013 Chapter 9: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating
More informationVirtual Memory CHAPTER CHAPTER OBJECTIVES. 8.1 Background
Virtual Memory 8 CHAPTER In Chapter 7, we discussed various memory-management strategies used in computer systems. All these strategies have the same goal: to keep many processes in memory simultaneously
More informationMove back and forth between memory and disk. Memory Hierarchy. Two Classes. Don t
Memory Management Ch. 3 Memory Hierarchy Cache RAM Disk Compromise between speed and cost. Hardware manages the cache. OS has to manage disk. Memory Manager Memory Hierarchy Cache CPU Main Swap Area Memory
More informationMemory Management Ch. 3
Memory Management Ch. 3 Ë ¾¾ Ì Ï ÒÒØ Å ÔÔ ÓÐÐ 1 Memory Hierarchy Cache RAM Disk Compromise between speed and cost. Hardware manages the cache. OS has to manage disk. Memory Manager Ë ¾¾ Ì Ï ÒÒØ Å ÔÔ ÓÐÐ
More informationOperating Systems CSE 410, Spring Virtual Memory. Stephen Wagner Michigan State University
Operating Systems CSE 410, Spring 2004 Virtual Memory Stephen Wagner Michigan State University Virtual Memory Provide User an address space that is larger than main memory Secondary storage is used to
More informationFor The following Exercises, mark the answers True and False
1 For The following Exercises, mark the answers True and False 1. An operating system is an example of application software. False 2. 3. 4. 6. 7. 9. 10. 12. 13. 14. 15. 16. 17. 18. An operating system
More informationCS370 Operating Systems
CS370 Operating Systems Colorado State University Yashwant K Malaiya Fall 2017 Lecture 23 Virtual memory Slides based on Text by Silberschatz, Galvin, Gagne Various sources 1 1 FAQ Is a page replaces when
More informationChapter 6 Memory 11/3/2015. Chapter 6 Objectives. 6.2 Types of Memory. 6.1 Introduction
Chapter 6 Objectives Chapter 6 Memory Master the concepts of hierarchical memory organization. Understand how each level of memory contributes to system performance, and how the performance is measured.
More informationEven in those cases where the entire program is needed, it may not all be needed at the same time (such is the case with overlays, for example).
Chapter 10 VIRTUAL MEMORY In Chapter 9, we discussed various memory-management strategies used in computer systems. All these strategies have the same goal: to keep many processes in memory simultaneously
More informationChapter 4 Memory Management
Chapter 4 Memory Management 4.1 Basic memory management 4.2 Swapping 4.3 Virtual memory 4.4 Page replacement algorithms 4.5 Modeling page replacement algorithms 4.6 Design issues for paging systems 4.7
More informationModule 8: Memory Management
Module 8: Memory Management Background Logical versus Physical Address Space Swapping Contiguous Allocation Paging Segmentation Segmentation with Paging 8.1 Background Program must be brought into memory
More informationMemory Management. Memory Management. G53OPS: Operating Systems. Memory Management Monoprogramming 11/27/2008. Graham Kendall.
Memory Management Memory Management Introduction Graham Kendall Memory Management consists of many tasks, including Being aware of what parts of the memory are in use and which parts are not Allocating
More informationChapter 8: Memory- Management Strategies. Operating System Concepts 9 th Edition
Chapter 8: Memory- Management Strategies Operating System Concepts 9 th Edition Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Strategies Background Swapping Contiguous Memory Allocation
More informationChapter 8: Memory- Management Strategies
Chapter 8: Memory Management Strategies Chapter 8: Memory- Management Strategies Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel 32 and
More informationModule 9: Memory Management. Background. Binding of Instructions and Data to Memory
Module 9: Memory Management Background Logical versus Physical Address Space Swapping Contiguous Allocation Paging Segmentation Segmentation with Paging 9.1 Background Program must be brought into memory
More informationChapter 8: Memory-Management Strategies
Chapter 8: Memory-Management Strategies Chapter 8: Memory Management Strategies Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel 32 and
More informationCourse Outline. Processes CPU Scheduling Synchronization & Deadlock Memory Management File Systems & I/O Distributed Systems
Course Outline Processes CPU Scheduling Synchronization & Deadlock Memory Management File Systems & I/O Distributed Systems 1 Today: Memory Management Terminology Uniprogramming Multiprogramming Contiguous
More informationCSE 120. Translation Lookaside Buffer (TLB) Implemented in Hardware. July 18, Day 5 Memory. Instructor: Neil Rhodes. Software TLB Management
CSE 120 July 18, 2006 Day 5 Memory Instructor: Neil Rhodes Translation Lookaside Buffer (TLB) Implemented in Hardware Cache to map virtual page numbers to page frame Associative memory: HW looks up in
More informationOperating Systems Memory Management. Mathieu Delalandre University of Tours, Tours city, France
Operating Systems Memory Management Mathieu Delalandre University of Tours, Tours city, France mathieu.delalandre@univ-tours.fr 1 Operating Systems Memory Management 1. Introduction 2. Contiguous memory
More informationECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective. Part I: Operating system overview: Memory Management
ECE 7650 Scalable and Secure Internet Services and Architecture ---- A Systems Perspective Part I: Operating system overview: Memory Management 1 Hardware background The role of primary memory Program
More informationVirtual Memory COMPSCI 386
Virtual Memory COMPSCI 386 Motivation An instruction to be executed must be in physical memory, but there may not be enough space for all ready processes. Typically the entire program is not needed. Exception
More informationMemory Management Prof. James L. Frankel Harvard University
Memory Management Prof. James L. Frankel Harvard University Version of 5:42 PM 25-Feb-2017 Copyright 2017, 2015 James L. Frankel. All rights reserved. Memory Management Ideal memory Large Fast Non-volatile
More informationVirtual Memory Outline
Virtual Memory Outline Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating Kernel Memory Other Considerations Operating-System Examples
More informationChapter 8: Main Memory
Chapter 8: Main Memory Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel 32 and 64-bit Architectures Example:
More informationChapter 3 Memory Management: Virtual Memory
Memory Management Where we re going Chapter 3 Memory Management: Virtual Memory Understanding Operating Systems, Fourth Edition Disadvantages of early schemes: Required storing entire program in memory
More informationChapter 3 - Memory Management
Chapter 3 - Memory Management Luis Tarrataca luis.tarrataca@gmail.com CEFET-RJ L. Tarrataca Chapter 3 - Memory Management 1 / 222 1 A Memory Abstraction: Address Spaces The Notion of an Address Space Swapping
More information9.1 Background. In Chapter 6, we showed how the CPU can be shared by a set of processes. As a result of
Chapter 9 MEMORY MANAGEMENT In Chapter 6, we showed how the CPU can be shared by a set of processes. As a result of CPU scheduling, we can improve both the utilization of the CPU and the speed of the computer's
More informationOperating Systems: Internals and Design Principles. Chapter 7 Memory Management Seventh Edition William Stallings
Operating Systems: Internals and Design Principles Chapter 7 Memory Management Seventh Edition William Stallings Memory Management Requirements Memory management is intended to satisfy the following requirements:
More informationThe Virtual Memory Abstraction. Memory Management. Address spaces: Physical and Virtual. Address Translation
The Virtual Memory Abstraction Memory Management Physical Memory Unprotected address space Limited size Shared physical frames Easy to share data Virtual Memory Programs are isolated Arbitrary size All
More informationChapter 8 Virtual Memory
Chapter 8 Virtual Memory Contents Hardware and control structures Operating system software Unix and Solaris memory management Linux memory management Windows 2000 memory management Characteristics of
More informationCHAPTER 8 - MEMORY MANAGEMENT STRATEGIES
CHAPTER 8 - MEMORY MANAGEMENT STRATEGIES OBJECTIVES Detailed description of various ways of organizing memory hardware Various memory-management techniques, including paging and segmentation To provide
More informationCOMPUTER SCIENCE 4500 OPERATING SYSTEMS
Last update: 1/6/2017 COMPUTER SCIENCE 4500 OPERATING SYSTEMS 2017 Stanley Wileman Module 10: Memory Management Part 2 In This Module 2! We conclude our study of memory management by considering how to!
More informationMemory. Objectives. Introduction. 6.2 Types of Memory
Memory Objectives Master the concepts of hierarchical memory organization. Understand how each level of memory contributes to system performance, and how the performance is measured. Master the concepts
More informationBackground. Contiguous Memory Allocation
Operating System Lecture 8 2017.5.9 Chapter 8 (Main Memory) Background Swapping Contiguous Memory Allocation Segmentation - Paging Memory Management Selection of a memory-management method for a specific
More informationCOMPUTER SCIENCE 4500 OPERATING SYSTEMS
Last update: 3/28/2017 COMPUTER SCIENCE 4500 OPERATING SYSTEMS 2017 Stanley Wileman Module 9: Memory Management Part 1 In This Module 2! Memory management functions! Types of memory and typical uses! Simple
More informationB. V. Patel Institute of Business Management, Computer &Information Technology, UTU
BCA-3 rd Semester 030010304-Fundamentals Of Operating Systems Unit: 1 Introduction Short Answer Questions : 1. State two ways of process communication. 2. State any two uses of operating system according
More informationUNIT - IV. What is virtual memory?
UNIT - IV Virtual Memory Demand Paging Process creation Page Replacement Allocation of frames Thrashing- File Concept - Access Methods Directory Structure File System Mounting File Sharing Protection.
More informationChapter 8: Main Memory. Operating System Concepts 9 th Edition
Chapter 8: Main Memory Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel
More informationProcesses and Tasks What comprises the state of a running program (a process or task)?
Processes and Tasks What comprises the state of a running program (a process or task)? Microprocessor Address bus Control DRAM OS code and data special caches code/data cache EAXEBP EIP DS EBXESP EFlags
More informationLast Class: Deadlocks. Where we are in the course
Last Class: Deadlocks Necessary conditions for deadlock: Mutual exclusion Hold and wait No preemption Circular wait Ways of handling deadlock Deadlock detection and recovery Deadlock prevention Deadlock
More informationChapter 8: Virtual Memory. Operating System Concepts
Chapter 8: Virtual Memory Silberschatz, Galvin and Gagne 2009 Chapter 8: Virtual Memory Background Demand Paging Copy-on-Write Page Replacement Allocation of Frames Thrashing Memory-Mapped Files Allocating
More informationChapter 7: Main Memory. Operating System Concepts Essentials 8 th Edition
Chapter 7: Main Memory Operating System Concepts Essentials 8 th Edition Silberschatz, Galvin and Gagne 2011 Chapter 7: Memory Management Background Swapping Contiguous Memory Allocation Paging Structure
More informationChapter 8: Memory Management. Operating System Concepts with Java 8 th Edition
Chapter 8: Memory Management 8.1 Silberschatz, Galvin and Gagne 2009 Background Program must be brought (from disk) into memory and placed within a process for it to be run Main memory and registers are
More informationPractice Exercises 449
Practice Exercises 449 Kernel processes typically require memory to be allocated using pages that are physically contiguous. The buddy system allocates memory to kernel processes in units sized according
More informationCS Operating Systems
CS 4500 - Operating Systems Module 9: Memory Management - Part 1 Stanley Wileman Department of Computer Science University of Nebraska at Omaha Omaha, NE 68182-0500, USA June 9, 2017 In This Module...
More informationCS Operating Systems
CS 4500 - Operating Systems Module 9: Memory Management - Part 1 Stanley Wileman Department of Computer Science University of Nebraska at Omaha Omaha, NE 68182-0500, USA June 9, 2017 In This Module...
More informationMemory Management. 3. What two registers can be used to provide a simple form of memory protection? Base register Limit Register
Memory Management 1. Describe the sequence of instruction-execution life cycle? A typical instruction-execution life cycle: Fetches (load) an instruction from specific memory address. Decode the instruction
More informationOperating Systems. User OS. Kernel & Device Drivers. Interface Programs. Memory Management
Operating Systems User OS Kernel & Device Drivers Interface Programs Management Brian Mitchell (bmitchel@mcs.drexel.edu) - Operating Systems 1 Management is an important resource that needs to be managed
More informationVirtual Memory. Virtual Memory. Demand Paging. valid-invalid bit. Virtual Memory Larger than Physical Memory
Virtual Memory Virtual Memory CSCI Operating Systems Design Department of Computer Science Virtual memory separation of user logical memory from physical memory. Only part of the program needs to be in
More informationMemory Management. Disclaimer: some slides are adopted from book authors slides with permission 1
Memory Management Disclaimer: some slides are adopted from book authors slides with permission 1 CPU management Roadmap Process, thread, synchronization, scheduling Memory management Virtual memory Disk
More informationPart Three - Memory Management. Chapter 8: Memory-Management Strategies
Part Three - Memory Management Chapter 8: Memory-Management Strategies Chapter 8: Memory-Management Strategies 8.1 Background 8.2 Swapping 8.3 Contiguous Memory Allocation 8.4 Segmentation 8.5 Paging 8.6
More informationChapter 8: Main Memory
Chapter 8: Main Memory Silberschatz, Galvin and Gagne 2013 Chapter 8: Memory Management Background Swapping Contiguous Memory Allocation Segmentation Paging Structure of the Page Table Example: The Intel
More informationChapter 9: Memory Management. Background
1 Chapter 9: Memory Management Background Swapping Contiguous Allocation Paging Segmentation Segmentation with Paging 9.1 Background Program must be brought into memory and placed within a process for
More informationChapter 8: Memory Management. Background Swapping Contiguous Allocation Paging Segmentation Segmentation with Paging
Chapter 8: Memory Management Background Swapping Contiguous Allocation Paging Segmentation Segmentation with Paging 1 Background Memory management is crucial in better utilizing one of the most important
More informationOperating Systems. 09. Memory Management Part 1. Paul Krzyzanowski. Rutgers University. Spring 2015
Operating Systems 09. Memory Management Part 1 Paul Krzyzanowski Rutgers University Spring 2015 March 9, 2015 2014-2015 Paul Krzyzanowski 1 CPU Access to Memory The CPU reads instructions and reads/write
More informationVirtual Memory. CSCI 315 Operating Systems Design Department of Computer Science
Virtual Memory CSCI 315 Operating Systems Design Department of Computer Science Notice: The slides for this lecture were based on those Operating Systems Concepts, 9th ed., by Silberschatz, Galvin, and
More informationLecture 7. Memory Management
Lecture 7 Memory Management 1 Lecture Contents 1. Memory Management Requirements 2. Memory Partitioning 3. Paging 4. Segmentation 2 Memory Memory is an array of words or bytes, each with its own address.
More informationOutlook. Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium
Main Memory Outlook Background Swapping Contiguous Memory Allocation Paging Structure of the Page Table Segmentation Example: The Intel Pentium 2 Backgound Background So far we considered how to share
More informationMemory Management. Dr. Yingwu Zhu
Memory Management Dr. Yingwu Zhu Big picture Main memory is a resource A process/thread is being executing, the instructions & data must be in memory Assumption: Main memory is infinite Allocation of memory
More information8.1 Background. Part Four - Memory Management. Chapter 8: Memory-Management Management Strategies. Chapter 8: Memory Management
Part Four - Memory Management 8.1 Background Chapter 8: Memory-Management Management Strategies Program must be brought into memory and placed within a process for it to be run Input queue collection of
More informationUnit II: Memory Management
Unit II: Memory Management 1. What is no memory abstraction? The simplest memory abstraction is no abstraction at all. Early mainframe computers (before 1960), early minicomputers (before 1970), and early
More informationMemory Management. Disclaimer: some slides are adopted from book authors slides with permission 1
Memory Management Disclaimer: some slides are adopted from book authors slides with permission 1 Recap Paged MMU: Two main Issues Translation speed can be slow TLB Table size is big Multi-level page table
More information