CS5460: Operating Systems
Lecture 14: Memory Management (Chapter 8)


Important from last time
- We're trying to build efficient virtual address spaces. Why?
- Virtual/physical translation is done by HW and SW cooperating
  - We want to make the common case fast in HW
- Key insight: treat a virtual address as a collection of separate elements
  - High bits identify a segment or page
  - Low bits identify the offset into the segment or page

Quick Review
- Base and bounds
  - Maps a single contiguous chunk of virtual address space to a single contiguous chunk of physical memory
  - Fast, small HW, and safe, but inflexible and lots of external fragmentation
- Segmentation
  - Small number of segment registers (base + bounds per segment)
  - Each segment maps contiguous virtual memory to contiguous physical memory
  - More flexible and less fragmentation than base and bounds, but the same issues remain

Modern hardware and OSes use paging
- Pages are like segments, but fixed size
  - So the bounds register goes away
  - External fragmentation goes away
- Since pages are small (often 4 or 8 KB) there are a lot of them
  - So the page table has to go into RAM
  - The page table might be huge
  - Accessing the page table requires a memory reference by the hardware to perform the translation
    - First, load the page table entry
    - Second, access the data the user asked for
- Today we look at how these problems are fixed
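To make "might be huge" concrete, here is a minimal back-of-the-envelope sketch, assuming a 32-bit virtual address space, 4 KB pages, and 4-byte PTEs (parameters chosen for illustration, not given on the slide):

    #include <stdio.h>

    int main(void) {
        unsigned long long va_bits   = 32;     /* virtual address width (assumed)       */
        unsigned long long page_size = 4096;   /* 4 KB pages (assumed)                  */
        unsigned long long pte_size  = 4;      /* bytes per page table entry (assumed)  */

        unsigned long long num_pages  = (1ULL << va_bits) / page_size;  /* 2^20 entries */
        unsigned long long table_size = num_pages * pte_size;           /* 4 MB         */

        printf("flat page table: %llu entries, %llu MB per process\n",
               num_pages, table_size >> 20);
        return 0;
    }

Under these assumptions every process needs a 4 MB table even if it touches only a few pages, which is why the table cannot live on the processor die.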

Problems
- The page table is too large to store on the processor die
- Two memory references per load or store issued by the user program is unacceptable
- Solution: cache recently used page table entries
  - TLB == translation lookaside buffer
  - The TLB is a fixed-size cache of recently used page table entries
  - If the TLB hit rate is sufficiently high → translation is as fast as segments
  - If the TLB hit rate is poor → load/store performance suffers badly
  - Key issue: access locality
- What is the effective access time with/without a TLB?
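As a hedged answer to that last question, a small sketch of the effective access time (EAT) calculation; the 1 ns TLB, 100 ns memory, and 99% hit rate figures are illustrative assumptions, not numbers from the lecture:

    #include <stdio.h>

    /* Effective access time with a TLB, using illustrative numbers:
     *   TLB hit:  TLB lookup + 1 memory access (the data itself)
     *   TLB miss: TLB lookup + 1 memory access for the PTE + 1 for the data
     * Without a TLB, every reference costs 2 memory accesses.
     */
    int main(void) {
        double tlb = 1.0;      /* TLB lookup time, ns (assumed)    */
        double mem = 100.0;    /* memory access time, ns (assumed) */
        double h   = 0.99;     /* TLB hit rate (assumed)           */

        double eat_with_tlb = h * (tlb + mem) + (1.0 - h) * (tlb + 2.0 * mem);
        double eat_no_tlb   = 2.0 * mem;

        printf("EAT with TLB:    %.1f ns\n", eat_with_tlb);   /* ~102 ns */
        printf("EAT without TLB: %.1f ns\n", eat_no_tlb);     /* 200 ns  */
        return 0;
    }

Even a 1% miss rate keeps the average close to a single memory access, which is why access locality matters so much.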

Paging Unit
- The CPU issues a load/store; the memory management unit (MMU) performs the translation:
  1. Compare the VPN to all TLB tags
  2. If no match, a TLB refill is needed
     - SW-managed TLB → trap to the OS
     - HW-managed TLB → hardware table walker
  3. Check whether the VA is valid
     - If not, generate a trap
  4. Concatenate the PPN with the offset to form the physical address
- This all needs to be very fast. Why?
[Figure: the virtual page # and offset flow from the CPU core through the TLB (tag, PPN, state fields); on a match the PPN is concatenated with the offset to form the physical address, otherwise a trap or HW refill occurs, and an invalid VA also traps.]
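A minimal software model of steps 1-4 above, assuming a small fully associative TLB and 4 KB pages; the structure and field names are invented for illustration:

    #include <stdint.h>
    #include <stdbool.h>

    #define TLB_ENTRIES 64
    #define PAGE_SHIFT  12          /* 4 KB pages (assumed) */

    struct tlb_entry {
        uint64_t vpn;               /* tag: virtual page number */
        uint64_t ppn;               /* physical page number     */
        bool     valid;             /* state bit                */
    };

    static struct tlb_entry tlb[TLB_ENTRIES];

    /* Translate a virtual address; returns false on a TLB miss
     * (where real hardware would trap to the OS or walk the page table). */
    bool translate(uint64_t va, uint64_t *pa) {
        uint64_t vpn    = va >> PAGE_SHIFT;
        uint64_t offset = va & ((1u << PAGE_SHIFT) - 1);

        for (int i = 0; i < TLB_ENTRIES; i++) {            /* step 1: compare tags */
            if (tlb[i].valid && tlb[i].vpn == vpn) {       /* step 3: valid?       */
                *pa = (tlb[i].ppn << PAGE_SHIFT) | offset; /* step 4: concatenate  */
                return true;
            }
        }
        return false;                                      /* step 2: TLB refill   */
    }

Hardware does the tag comparison on all entries in parallel, which is part of why the lookup can finish within a cycle.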

TLB Issues
- How large should the TLB be?
- What limits TLB size?
- How does a TLB interact with context switches?

TLB Discussion
- What are typical TLB sizes and organization?
  - 16-1024 entries, fully associative, sub-cycle access latency
  - TLB misses take 10-100 cycles
  - Modern chips have multiple levels of TLB
  - Perform the TLB lookup in parallel with the L1 cache tag load
    - Requires a virtually tagged L1 cache
- Why is the TLB often fully associative?
- TLB reach
  - The amount of address space that can be mapped by the TLB entries
  - What are the architectural trends in:
    - TLB size and associativity
    - Main memory size
  - Example: 64-entry TLB with 4 KB pages → 256 KB reach
  - Good news: the 90-10 rule (90% of accesses go to 10% of the address space)
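The reach example worked out as a quick sketch; the 64 entries and 4 KB pages come from the slide, while the 2 MB page size is an assumed contrast to show how larger pages stretch reach:

    #include <stdio.h>

    /* TLB reach = number of entries * address space mapped per entry. */
    int main(void) {
        unsigned long long entries = 64;                 /* from the slide                         */
        unsigned long long sizes[] = { 4ULL << 10,       /* 4 KB base page                         */
                                       2ULL << 20 };     /* 2 MB page, assumed for contrast        */

        for (int i = 0; i < 2; i++) {
            unsigned long long reach = entries * sizes[i];
            printf("page size %llu KB -> reach %llu KB\n", sizes[i] >> 10, reach >> 10);
        }
        return 0;
    }

With 4 KB pages the reach is 256 KB; the same 64 entries with 2 MB pages would cover 128 MB, which foreshadows the superpage discussion later in the lecture.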

More Paging Discussion
- What are typical state bits?
  - Valid bit
  - Protection bits (read-only, execute-only, read-write)
  - Referenced bit
  - Modified bit
  - (real PTEs tend to have more)
  - The referenced and modified bits are used to support demand paging
- What happens during a context switch?
  - The PCB needs to be extended to contain (or point to) the page table
  - Save/restore the page table base register (for a hardware-filled TLB)
  - Flush the TLB (if the PID is not in the tag field)
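One way to picture these state bits is a PTE sketched as a C bit-field struct; the exact layout and field widths are architecture specific, so everything below is illustrative:

    #include <stdint.h>

    /* Illustrative page table entry layout (not any real architecture's). */
    struct pte {
        uint32_t valid      : 1;   /* mapping is present                          */
        uint32_t read       : 1;   /* protection bits                             */
        uint32_t write      : 1;
        uint32_t execute    : 1;
        uint32_t referenced : 1;   /* set on access; used for demand paging       */
        uint32_t modified   : 1;   /* set on write ("dirty"); used for paging out */
        uint32_t ppn        : 26;  /* physical page number                        */
    };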

Page Table Organization
- Simple flat (full) table
  - Simple, fast lookups
  - Huge!
- Hierarchical
  - Two or more levels of tables
    - Leaf tables hold the PTEs
  - Fairly simple, fairly fast
  - Size roughly proportional to the amount of allocated memory
- Inverted
  - One entry per physical page
  - Each entry contains a VPN and ASID
  - Small table size
  - Lookups are more expensive
    - Hashing helps
    - Poor memory locality
[Figure: left, a hierarchical walk from a root page table down to leaf PTEs (valid, R/W bits) producing a PPN; right, an inverted page table in main memory searched via hash(PID, page #) to find the matching frame, whose index plus the offset forms the physical address.]
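A minimal sketch of the inverted-table lookup path, assuming a simple hash over (PID, VPN) with linear probing; the structure and function names are invented for illustration:

    #include <stdint.h>

    #define NFRAMES 1024            /* number of physical frames (assumed) */

    /* One entry per physical frame: which (pid, vpn) currently maps to it. */
    struct ipt_entry {
        int      valid;
        uint32_t pid;
        uint64_t vpn;
    };

    static struct ipt_entry ipt[NFRAMES];

    static unsigned ipt_hash(uint32_t pid, uint64_t vpn) {
        return (unsigned)((pid * 2654435761u) ^ vpn) % NFRAMES;
    }

    /* Returns the frame number for (pid, vpn), or -1 if not resident
     * (a real system would then take a page fault). */
    int ipt_lookup(uint32_t pid, uint64_t vpn) {
        unsigned start = ipt_hash(pid, vpn);
        for (unsigned i = 0; i < NFRAMES; i++) {
            unsigned slot = (start + i) % NFRAMES;
            if (!ipt[slot].valid)
                return -1;
            if (ipt[slot].pid == pid && ipt[slot].vpn == vpn)
                return (int)slot;   /* frame number == table index */
        }
        return -1;
    }

Because the table is indexed by physical frame, its size tracks physical memory rather than the virtual address space, which is the "small table size" advantage; the cost is the hash-and-search on every translation that misses the TLB.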

[Figure: two-level paging. The virtual address splits into page number fields p1 and p2 and an offset d; p1 indexes the outer page table, p2 indexes the inner page table, and d selects the byte within the desired page.]

Multi-level page tables
- Why does this save space?
- Early virtual memory systems used a single-level page table
- The VAX and 386 used a 2-level page table
- SPARC uses 3 levels
- Alpha, IA-64, and x86-64 have 4 levels
- In many OSes, page tables other than the top level can themselves be paged
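A minimal sketch of a two-level walk matching the p1 / p2 / d split shown earlier, assuming a 32-bit address with a 10/10/12-bit split (illustrative, not any particular machine):

    #include <stdint.h>
    #include <stddef.h>

    #define OFFSET_BITS 12                      /* 4 KB pages              */
    #define INDEX_BITS  10                      /* 1024 entries per table  */

    typedef struct {
        uint32_t ppn   : 20;
        uint32_t valid : 1;
    } pte_t;

    /* Outer table holds pointers to inner tables; NULL means "not present". */
    typedef struct {
        pte_t *inner[1 << INDEX_BITS];
    } page_dir_t;

    /* Walk the two-level table; returns 0 on success, -1 on a fault. */
    int walk(page_dir_t *dir, uint32_t va, uint32_t *pa) {
        uint32_t p1 = (va >> (OFFSET_BITS + INDEX_BITS)) & ((1 << INDEX_BITS) - 1);
        uint32_t p2 = (va >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1);
        uint32_t d  = va & ((1 << OFFSET_BITS) - 1);

        pte_t *inner = dir->inner[p1];          /* first memory reference  */
        if (inner == NULL || !inner[p2].valid)  /* second memory reference */
            return -1;                          /* page fault              */

        *pa = ((uint32_t)inner[p2].ppn << OFFSET_BITS) | d;
        return 0;
    }

The space saving comes from the NULL outer entries: unallocated regions of the address space need no inner tables at all, so table size roughly tracks allocated memory.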

Sharing
- How do you share memory?
  - Entries in different processes' page tables map to the same PPN
- Questions:
  - What does this imply about the granularity of sharing? Can we share only a 4-byte int?
  - Can we set it up so that only one of the sharers can write the region?
  - How can we use this idea to implement copy-on-write? (see the sketch below)
[Figure: page tables for processes P1 and P2 mapping some of their virtual pages (VP0-VP5) to the same physical pages.]
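A hedged sketch of copy-on-write built on shared PPNs: both mappings point at the same frame and are made read-only, and the write-fault handler copies the page on the first store. The structures and helper names are invented for illustration.

    #include <stdint.h>
    #include <string.h>
    #include <stdlib.h>

    #define PAGE_SIZE 4096

    typedef struct {
        void *frame;         /* physical page (simulated with malloc) */
        int   writable;
        int   cow;           /* marked copy-on-write?                 */
    } pte_t;

    /* Share a page COW-style: both PTEs point at the same frame,
     * and both become read-only until someone writes. */
    void share_cow(pte_t *parent, pte_t *child) {
        child->frame     = parent->frame;
        parent->writable = child->writable = 0;
        parent->cow      = child->cow      = 1;
    }

    /* Called from the write-fault handler: give the faulting process
     * its own private copy and restore write permission. */
    void cow_fault(pte_t *pte) {
        void *copy = malloc(PAGE_SIZE);
        memcpy(copy, pte->frame, PAGE_SIZE);
        pte->frame    = copy;
        pte->writable = 1;
        pte->cow      = 0;
    }

A real implementation also reference-counts the shared frame so that the last remaining sharer simply regains write permission instead of copying.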

Initializing an Address Space
- Determine the number of pages needed
  - Examine header information in the executable file
- Determine the virtual address layout
  - The executable's header determines the size of the various segments
  - The OS specifies positioning (e.g., base of code, location of stack)
- Allocate the necessary pages and initialize their contents
  - Initialize each page table entry to contain the VPN → PPN mapping, set valid, ...
  - Mark current TLB entries as invalid (i.e., TLB flush)
- Start the process
  - The PC should point at a valid address in the code segment
  - Stack space is allocated as the process touches new pages

Superpages
- Problem: TLB reach is shrinking as a percentage of memory size
- Solution: superpages
  - Permit pages to have variable size (back to segments?)
  - For simplicity, restrict generality:
    - Power-of-two region sizes
    - Aligned to the superpage size (e.g., a 1 MB superpage aligned on a 1 MB boundary)
    - Contiguous
- Problem: these restrictions limit applicability. How?
[Figure: a TLB entry extended with a size field alongside the tag, PPN, and state bits.]

Example: Superpage Usage
- Virtual addresses 0x00004000-0x00007000 map to physical addresses 0x80240000-0x80243000
- Both the virtual and physical ranges are contiguous
- Both ranges are aligned on the superpage boundary
- Page table entry: virtual = 0x00004, physical = 0x80240, size = 002, valid, plus access / dirty / read-only / supervisor bits
- Size denotes the number of base pages in the superpage (2^size)
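A sketch of how the size field changes the TLB match and the final concatenation, assuming 4 KB base pages and the 2^size convention from this example; the structure and field names are illustrative:

    #include <stdint.h>
    #include <stdbool.h>

    #define BASE_SHIFT 12                       /* 4 KB base pages */

    struct super_tlb_entry {
        uint64_t vpn;       /* virtual base-page number of the region start  */
        uint64_t ppn;       /* physical base-page number of the region start */
        unsigned size;      /* region covers 2^size base pages               */
        bool     valid;
    };

    /* The match ignores the low 'size' bits of the VPN, so one entry
     * covers the whole superpage. */
    bool super_translate(const struct super_tlb_entry *e, uint64_t va, uint64_t *pa) {
        uint64_t vpn = va >> BASE_SHIFT;
        if (!e->valid || (vpn >> e->size) != (e->vpn >> e->size))
            return false;

        uint64_t page_in_region = vpn & ((1ULL << e->size) - 1);
        uint64_t offset         = va & ((1ULL << BASE_SHIFT) - 1);
        *pa = ((e->ppn + page_in_region) << BASE_SHIFT) | offset;
        return true;
    }

With the example entry (virtual 0x00004, physical 0x80240, size 2), address 0x00006123 matches and translates to 0x80242123.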

Superpage Discussion
- What are good candidates for superpages?
  - The kernel, or at least the portions of the kernel that are not paged
  - The frame buffer
  - Large wired data structures
    - Scientific applications run in batch mode
    - In-core databases
- How might the OS exploit superpages?
  - Simple: a few hardwired regions (e.g., kernel and frame buffer)
  - Improved: provide system calls so applications can request them
  - Holy grail: the OS watches page access behavior and determines which pages are hot enough to warrant superpaging
- Why might you not want to use superpages?

Paged Segmentation
- Problem with paging: a large VA space → a large page table
- Idea: combine segmentation and paging
  - Divide the address space into large contiguous segments
  - Divide segments into smaller fixed-size pages
  - Divide main memory into pages
  - Two-stage address translation
- Benefits:
  - Reduced page table size
  - Can share segments easily
- Problems:
  - Extra complexity
[Figure: the virtual address splits into seg #, page #, and offset; the segment table (base/limit per segment) selects and bounds-checks a per-segment page table (frame #, protection per page), traps on a limit violation, and the selected frame # is combined with the offset to form the physical address.]
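A minimal sketch of the two-stage translation: the segment number selects and bounds-checks a per-segment page table, then the page number indexes it. The address split and all names are assumptions for illustration.

    #include <stdint.h>

    #define PAGE_SHIFT 12                       /* 4 KB pages (assumed)        */
    #define PAGE_BITS  8                        /* pages per segment (assumed) */

    typedef struct { uint32_t frame; uint32_t prot; uint32_t valid; } pte_t;

    typedef struct {
        pte_t   *page_table;    /* base: this segment's page table       */
        uint32_t limit;         /* number of valid pages in this segment */
    } seg_entry_t;

    /* Assumed virtual address layout: | seg# | page# | offset | */
    int paged_seg_translate(const seg_entry_t *seg_table, uint32_t va, uint32_t *pa) {
        uint32_t offset = va & ((1u << PAGE_SHIFT) - 1);
        uint32_t page   = (va >> PAGE_SHIFT) & ((1u << PAGE_BITS) - 1);
        uint32_t seg    = va >> (PAGE_SHIFT + PAGE_BITS);

        const seg_entry_t *s = &seg_table[seg];
        if (page >= s->limit)                   /* segment limit check → trap */
            return -1;

        const pte_t *pte = &s->page_table[page];
        if (!pte->valid)                        /* page fault */
            return -1;

        *pa = (pte->frame << PAGE_SHIFT) | offset;
        return 0;
    }

Because each segment gets its own small page table sized by its limit, the total page-table footprint tracks what the process actually uses rather than the full virtual address space.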

Paged Segmentation
- Sharing:
  - Share individual pages → copy page table entries
  - Share entire segments → copy segment table entries
- Protection: can be associated with either the segment or the page tables
- Implementation:
  - Segment tables in the MMU
  - Page tables in main memory
- Practice:
  - x86 supports pages & segments
  - RISCs support only paging
[Figure: sharing example with two processes; each process's virtual memory maps Seg0-Seg3 into physical memory, with the two common segments (0 and 3) mapped to the same physical regions.]

Bringing It All Together
- Paging is a major improvement over segmentation
  - Eliminates the external fragmentation problem (no compaction)
  - Permits flexible sharing between processes
  - Enables processes to run when only partially loaded (more soon!)
- Significant issues remain
  - Page tables are much larger than segment tables
  - A TLB is required → its effectiveness seriously impacts performance
  - More complex OS routines for managing the page table
    - But management of the physical address space is much easier
  - Translation overheads tend to be higher
- Question: How can we let processes run when only some of their pages are present in main memory?