Virtual Memory
Today: Segmentation; Paging; A good, if common, example
Virtual memory system Goals
Transparency: programs should not know that memory is virtualized; the OS + HW multiplex memory among processes behind the scenes
Efficiency: both in time and space; not making processes too slow, and using physical memory efficiently
Protection: isolating the address spaces of processes from each other; i.e., a process should not be able to access or affect the memory of any other process or the OS itself
2
Virtual memory IBM OS/360
Split memory into n partitions (possibly of unequal sizes); one process per partition
(Figure: Operating System at 0; Partition 1 at 64K; Partition 2 at 128K; Partition 3 at 256K; Partition 4 at 320K-512K; each partition holds a process's program code, heap, free space, and stack)
How to deal with large address spaces with potentially a lot of free space?
3
Virtual memory system Goals (revisited)
Transparency: programs should not know that memory is virtualized; the OS + HW multiplex memory among processes behind the scenes
Flexibility: processes can run on machines with less physical memory than they need
Efficiency: both in time and space; not making processes too slow, and using physical memory efficiently
Protection: isolating the address spaces of processes from each other; i.e., a process should not be able to access or affect the memory of any other process or the OS itself
4
Virtual memory: keep in memory only what's needed
Don't need the full address space resident in memory; the OS uses main memory as a cache
Overlay approach (~1960s): implemented by the user; easy on the OS, hard on the programmer
Example, overlays for a two-pass assembler: Pass 1 70KB, Pass 2 80KB, Symbol Table 20KB, Common Routines 30KB; total 200KB. With a 10KB overlay driver and two overlays (symbol table + common routines + one pass each): 120KB + 130KB instead of 200KB at once
5
Segmentation: hide the complexity, let the OS do the job
Related idea: generalize base/limit - one <base, limit> pair per segment (code, heap, stack)
Only used space is allocated; each segment can grow independently
Yes, "segmentation fault" (segfault) comes from this!
6
Which segment? How does the HW know which segment an address refers to?
Explicit approach: with three segments, use the top two bits of the virtual address; the rest are the offset into the segment
(14-bit virtual address: bits 13-12 select the segment, bits 11-0 are the offset)
7
Which segment? Explicit approach, assuming bases and limits are kept in arrays:
  Segment = (VirtAddr & SEG_MASK) >> SEG_SHIFT;
  Offset  = VirtAddr & OFFSET_MASK;
  if (Offset >= Limit[Segment])
      /* protection fault */
  else
      PhysAddr = Base[Segment] + Offset;
  ...
Implicit approach: how was the address formed? From the PC? From the stack or base pointer?
8
Sharing and fragmentation
With a little bit of hardware support, memory segments can be shared (e.g., code): just a few bits per segment for read/write/execute rights
Finer-grain segmentation (e.g., Multics, Burroughs B5000) needs a segment table to track the segments; the OS can keep track of segments in use and manage memory more efficiently
9
Sharing and fragmentation
Many small variable-sized segments lead to external fragmentation! Compaction is expensive; free-list management algorithms (e.g., best fit, worst fit, ...) may help but cannot avoid it
10
And then came paging
Avoid fragmentation from variable-sized blocks: use fixed-sized blocks, pages
The virtual address space is split into pages, each a contiguous range of addresses; physical memory is split into page frames
(Figure: virtual pages 0..N mapped onto physical page frames 0..M)
11
Virtual memory paging
Pages and page frames are generally the same size; many processors support multiple page sizes (e.g., x86-64: 4KB, 2MB, 1GB)
Pages are mapped onto frames; the OS + MMU do the translation
Key: not all pages have to be in memory at once. If the page is in memory, the system does the mapping; else, the OS is told to get the missing page and re-execute the faulting instruction
12
Virtual memory paging: good for everyone
Developers: memory appears as a contiguous address space with a size independent of the hardware
Simple and flexible: no assumptions on how memory is used
The memory manager can use memory efficiently, with minimal internal fragmentation (small units) and no external fragmentation (fixed-size units)
Protection, since processes can't access each other's memory
13
Address translation with paging
A virtual address has two parts: the high bits are the virtual page number, the low bits the offset
The virtual page number is an index into a page table, which maps virtual pages to page frames; it is managed by the OS, with one entry per page in the virtual address space
The physical address is the page frame number plus the offset
14
Address translation with paging
(Figure: virtual address = {virtual page #, offset}; the page table maps virtual page # to page frame #; physical address = {page frame #, offset} into physical memory)
Each process has its own page table
15
And now a short break xkcd 16
Pages, page frames and tables
A simple example: 64KB virtual address space, 4KB pages, 32KB physical address space; 16 pages and 8 page frames
Page table (virtual range -> page frame, X = unmapped): 0-4K -> 2, 4-8K -> 1, 8-12K -> 6, 12-16K -> 0, 16-20K -> 4, 20-24K -> 3, 24-28K -> X, 28-32K -> X, 32-36K -> X, 36-40K -> 5, 40-44K -> X, 44-48K -> 7, 48-64K -> X
Try to access: MOV REG, 0 -> virtual address 0 -> page frame 2 -> physical address 8192
17
Pages, page frames and tables (same example and page table as the previous slide)
MOV REG, 8192 -> virtual address 8192 -> page 2 -> page frame 6 -> physical address 24576
MOV REG, 20500 -> virtual address 20500 (20480 + 20) -> page 5 -> page frame 3 -> physical address 12288 + 20 = 12308
18
Since virtual memory >> physical memory
Use a present/absent bit; the MMU checks it. If the page is not there, a page fault traps to the OS: the OS picks a victim (how?), sends the victim to disk, brings in the new page, and updates the page table
Example (same page table as before): MOV REG, 32780 -> virtual address 32780 -> virtual page 8, byte 12 (32768 + 12) -> page is unmapped -> page fault!
19
Translation in action
An MMU with 16 4KB pages. The page # (the first 4 bits of the 16-bit virtual address) indexes into the page table. If the present bit is clear -> page fault; else the 3-bit frame # from the table, concatenated with the 12-bit offset, gives a 15-bit physical address
(Figure: virtual address 0010 000000000100 -> page 2, present bit 1, frame 110 -> physical address 110 000000000100)
20
Page table entry
An opportunity: there's a PTE lookup per memory reference; what else can we do with it?
Looking at the details, a PTE holds: the page frame number (the most important field), present/absent, protection, modified, referenced, and caching-disabled bits
Protection: 1 bit for R&W vs R, or 3 bits for R/W/X
21
Page table entry, continued
Present/absent bit: says whether or not the virtual page is mapped
Modified (M), the dirty bit: set when a write to the page has occurred
Referenced (R): has the page been used?
Caching disabled (D): ensures we are not reading a stale cached copy; key for pages that map onto device registers rather than memory
22
Segmentation and paging
Paging pros and cons: easy to allocate physical memory; naturally leads to virtual memory; but address translation takes time and page tables can be large
Segmentation pros and cons: it's more logical; facilitates sharing and reuse; but has all the problems of variable-sized partitions
23
Segmentation w/ paging - MULTICS
Large segments? Page them (e.g., MULTICS, x86, x86-64)
A MULTICS process has up to 2^18 segments of up to ~64K words (36-bit words). Each process has a segment table (itself a paged segment, the descriptor segment); most (but not all) segments are paged
Virtual address: segment # (18b), page # (6b), offset (10b); each segment descriptor points to that segment's page table of page entries
24
Segmentation w/ paging - MULTICS
To and from the segment table: the segment table's address is found in a Descriptor Base Register, with one entry per segment. The segment descriptor indicates whether the segment is in memory and points to its page table; the address of a segment in secondary memory is kept in another table. The page # and offset together form the address within the segment
25
Segmentation w/ paging - MULTICS
With memory references: use the segment # to get the segment descriptor. If the segment is in memory, the segment's page table is in memory too. Protection violation? Look at the page table entry - is the page in memory? Add the offset to the page origin to get the word location. To speed things up: a cache (TLB)
Segment descriptor format (bits): main memory address of the page table (18), segment length in pages (9), page size (1: 0 = 1024 words, 1 = 64 words), segment paged? (1), misc bits, protection bits (3)
26
Considerations with page tables
Two key issues: the mapping must be fast, since it is done on every memory reference, at least once per instruction; and with large address spaces, page tables will be large
32-bit addresses & 4KB pages: 12-bit offset, 20-bit page # -> ~1 million PTEs. 64-bit & 4KB pages: 2^12-byte pages, 2^52 pages -> ~4.5x10^15!!!
Simplest solutions: page table in registers (fast during execution, but $$$ & slow to context switch), or page table in memory with a Page Table Base Register (PTBR) (fast to context switch & cheap, but slow during execution)
27
Next time Bigger and faster 28