Networks and Operating Systems, Chapter 5: Memory Management


ADRIAN PERRIG & TORSTEN HOEFLER
Networks and Operating Systems (252-0062-00)
Chapter 5: Memory Management

http://support.apple.com/kb/ht56 (Jul. 05)
Description: The iOS kernel has checks to validate that the user-mode pointer and length passed to the copyin and copyout functions would not result in a user-mode process being able to directly access kernel memory. The checks were not being used if the length was smaller than one page. This issue was addressed through additional validation of the arguments to copyin and copyout.

Signal types (some of them)
Name - Description / meaning - Default action
SIGHUP - Hangup / death of controlling process (hanging up the phone, i.e. the terminal) - Terminate process
SIGINT - Interrupt character typed (CTRL-C) - Terminate process
SIGQUIT - Quit character typed (CTRL-\) - Core dump
SIGKILL - kill -9 <process id>; can't be disabled! - Terminate process
SIGSEGV - Segfault (invalid memory reference) - Core dump
SIGPIPE - Write on pipe with no reader (e.g., after the other side of the pipe has closed it) - Terminate process
SIGALRM - alarm() goes off - Terminate process
SIGCHLD - Child process stopped or terminated - Ignored
SIGSTOP - Stop process; used by debuggers (e.g., gdb) and the shell (CTRL-Z) - Stop
SIGCONT - Continue process - Continue
SIGUSR1, SIGUSR2 - User-defined signals - Terminate process
Etc. - see man 7 signal for the full list

Where do signals come from?
- Memory management subsystem: SIGSEGV, etc.
- Kernel trap handlers: SIGFPE
- IPC system: SIGPIPE
- The TTY subsystem: SIGINT, SIGQUIT, SIGHUP
- Other user processes: SIGUSR1, SIGUSR2, SIGKILL, SIGSTOP, SIGCONT

Sending a signal to a process
From the Unix shell:
    $ kill -HUP <pid>
From C:
    #include <signal.h>
    int kill(pid_t pid, int signo);
"kill" is a rather unfortunate name: it delivers any signal, not just fatal ones.

Unix signal handlers
Change what happens when a signal is delivered:
- Default action
- Ignore the signal
- Call a user-defined function in the process: the signal handler
This allows signals to be used like user-space interrupts.

Oldskool: signal()
Test your C parsing skills:
    #include <signal.h>
    void (*signal(int sig, void (*handler)(int)))(int);
What does this mean?
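Before unpacking that declaration, here is a minimal sketch that ties the last two slides together: a child installs a handler with the classic signal() call and the parent delivers SIGUSR1 to it with kill(). The handler name, the use of SIGUSR1 and the sleep-based synchronisation are illustrative choices, not taken from the slides.

    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_usr1 = 0;

    static void my_handler(int sig)      /* a handler looks like: void f(int) */
    {
        got_usr1 = 1;                    /* only async-signal-safe work here  */
    }

    int main(void)
    {
        pid_t pid = fork();
        if (pid == 0) {                  /* child: install handler, then wait */
            if (signal(SIGUSR1, my_handler) == SIG_ERR)
                _exit(1);
            while (!got_usr1)            /* (race with pause() ignored here)  */
                pause();                 /* returns once a signal is handled  */
            _exit(0);
        }

        sleep(1);                        /* crude: let the child reach pause() */
        kill(pid, SIGUSR1);              /* "kill" just delivers a signal      */
        waitpid(pid, NULL, 0);
        printf("child handled SIGUSR1 and exited\n");
        return 0;
    }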

Oldskool: signal()
    void (*signal(int sig, void (*handler)(int)))(int);
Unpacking this:
- A handler looks like: void my_handler(int);
- signal() takes two arguments: an integer (the signal type, e.g. SIGPIPE) and a pointer to a handler function
- ...and returns a pointer to a handler function: the previous handler
- Special handler arguments: SIG_IGN (ignore), SIG_DFL (default), SIG_ERR (error code)

Unix signal handlers
- The signal handler can be called at any time!
- It executes on the current user stack
  - If the process is in the kernel, it may need to retry the current system call
  - The handler can also be set to run on a different (alternate) stack
- The user process is in an undefined state when the signal is delivered

Implications
- There is very little you can safely do in a signal handler!
- You can't safely access program global or static variables
- Some system calls are re-entrant and can be called, including signal() and sigaction(); for the full list see man 7 signal
- Many C library calls cannot be safely called (including the _r variants!)
- You can sometimes execute a longjmp if you are careful
- What happens if another signal arrives?

Multiple signals
- If multiple signals of the same type are to be delivered, Unix will discard all but one.
- If signals of different types are to be delivered, Unix will deliver them in any order.
- Serious concurrency problem: how to make sense of this?

A better signal(): POSIX sigaction()
    #include <signal.h>
    int sigaction(int signo, const struct sigaction *act, struct sigaction *oldact);
act is the new action for signal signo; the previous action is returned in oldact.
    struct sigaction {
        void (*sa_handler)(int);                        /* signal handler */
        sigset_t sa_mask;                               /* signals to be blocked in this handler (cf. fd_set) */
        int sa_flags;
        void (*sa_sigaction)(int, siginfo_t *, void *); /* more sophisticated signal handler (depending on flags) */
    };

Signals as upcalls
- A particularly specialized (and complex) form of upcall: a kernel RPC to a user process
- Other OSes use upcalls much more heavily, including Barrelfish
- Scheduler Activations: dispatch every process using an upcall instead of a return
- A very important structuring concept for systems!
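A minimal sketch of installing a handler with sigaction() as just described; the choice of SIGINT, the blocking of SIGQUIT while the handler runs, and the SA_RESTART flag are illustrative, not prescribed by the slides.

    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t interrupted = 0;

    static void on_sigint(int sig)   /* only async-signal-safe work, per the Implications slide */
    {
        interrupted = 1;
    }

    int main(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_sigint;        /* classic one-argument handler         */
        sigemptyset(&sa.sa_mask);
        sigaddset(&sa.sa_mask, SIGQUIT);  /* block SIGQUIT while the handler runs */
        sa.sa_flags = SA_RESTART;         /* restart interrupted system calls     */

        if (sigaction(SIGINT, &sa, NULL) < 0) {
            perror("sigaction");
            return 1;
        }

        while (!interrupted)
            pause();                      /* returns after the handler has run    */
        printf("got SIGINT, exiting cleanly\n");
        return 0;
    }

Unlike signal(), the sa_mask and sa_flags fields make it explicit which signals are blocked during the handler and how interrupted system calls behave.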

Our Small Quiz
True or false (raise hand):
- Mutual exclusion on a multicore can be achieved by disabling interrupts
- Test and set can be used to achieve mutual exclusion
- Test and set is more powerful than compare and swap
- The CPU retries load-linked/store-conditional instructions after a conflict
- The best spinning time is x times the context switch time
- Priority inheritance can prevent priority inversion
- The receiver never blocks in asynchronous IPC
- The sender blocks in synchronous IPC if the receiver is not ready
- A pipe file descriptor can be sent to a different process
- Pipes do not guarantee ordering
- Named pipes in Unix behave like files
- A process can catch all signals with handlers
- Signals always trigger actions at the signaled process
- One can implement a user-level tasking library using signals
- Signals of the same type are buffered in the kernel

Goals of Memory Management
- Allocate physical memory to applications
- Protect an application's memory from others
- Allow applications to share areas of memory (data, code, etc.)
- Create the illusion of a whole address space (virtualization of the physical address space)
- Create the illusion of more memory than you have (tomorrow: demand paging)
Book: OSPP Chapter 8

In CASP last semester we saw:
- Assorted uses for virtual memory
- x86 paging: page table format, translation process, translation lookaside buffers (TLBs)
- Interaction with caches
- Performance implications for application code, e.g., matrix multiply

What's new this semester?
- A wider range of memory management hardware: base/limit, segmentation, inverted page tables, etc.
- How the OS uses the hardware: demand paging and swapping, page replacement algorithms, frame allocation policies

Terminology
- Physical address: as seen by the memory unit
- Virtual or logical address: issued by the processor for loads, stores, instruction fetches, and possibly others (e.g., TLB fills)

Memory management functions
- Allocating physical addresses to applications
- Managing the name translation of virtual addresses to physical addresses
- Performing access control on memory accesses
These functions usually involve the hardware Memory Management Unit (MMU).
Simple(st) scheme: partitioned memory.

Base and Limit Registers
A pair of base and limit registers define the logical address space.
[Diagram: physical memory holds the operating system and several processes in contiguous regions; for the running process, the base register gives the start of its region and the limit register its length.]

Issue: address binding
- The base address isn't known until load time.
- Options: 1. compiled code must be completely position-independent, or 2. a relocation register maps compiled addresses dynamically to physical addresses.

Dynamic relocation using a relocation register (contiguous allocation)
[Diagram: the MMU adds the relocation register to every logical address issued by the CPU to form the physical address sent to memory.]
- Main memory is usually split into two partitions: the resident OS, usually in low memory with the interrupt vector, and user processes in high memory.
- Relocation registers protect user processes from 1. each other and 2. changing operating-system code and data.
- Registers: the base register contains the value of the smallest physical address; the limit register contains the range of logical addresses; each logical address must be less than the limit register. The MMU maps logical addresses dynamically.
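The relocation-plus-limit check amounts to a compare and an add. A minimal sketch under made-up register values; translate() returns false where the real hardware would raise an addressing-error trap.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Model of the two MMU registers described above. */
    struct mmu_regs {
        uint32_t base;    /* relocation register: start of the process in physical memory */
        uint32_t limit;   /* limit register: size of the logical address space             */
    };

    static bool translate(const struct mmu_regs *r, uint32_t laddr, uint32_t *paddr)
    {
        if (laddr >= r->limit)
            return false;             /* trap: addressing error */
        *paddr = r->base + laddr;     /* dynamic relocation     */
        return true;
    }

    int main(void)
    {
        struct mmu_regs r = { .base = 0x140000, .limit = 0x20000 };  /* made-up values */
        uint32_t p;
        if (translate(&r, 0x00346, &p))
            printf("logical 0x00346 -> physical 0x%x\n", p);
        if (!translate(&r, 0x30000, &p))
            printf("logical 0x30000 -> trap (exceeds limit)\n");
        return 0;
    }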

Hardware Support for Relocation and Limit Registers
[Diagram: the CPU's logical address is first compared with the limit register; if it is smaller, the relocation register is added to produce the physical address sent to memory; otherwise the MMU traps with an addressing error.]

Base & Limit summary
- Simple to implement (addition & compare)
- Physical memory fragmentation
- Only a single contiguous address range:
  - How to share data between applications?
  - How to share program text (code)?
  - How to load code dynamically?
- Total logical address space <= physical memory

Segmentation
- Generalize base + limit: physical memory is divided into segments.
- Logical address = (segment id, offset)
- The segment identifier is supplied by: an explicit instruction reference, an explicit processor segment register, or implicit instruction or process state.

User's view of a program vs. segmentation reality
[Diagram: the user sees the program as segments in a logical address space (main program, heap, stack, shared library, symbol table); in reality these segments sit at scattered locations in physical memory.]

Segmentation hardware
[Diagram: the logical address is split into segment number s and offset d; s indexes the segment table, whose entry supplies a limit and a base; if d < limit the physical address is base + d, otherwise the MMU traps with an addressing error.]

Segmentation architecture
- Segment table: each entry has a base (starting physical address of the segment) and a limit (length of the segment).
- Segment-table base register (STBR): location of the current segment table in memory.
- Segment-table length register (STLR): size of the current segment table; a segment number s is legal only if s < STLR.

Segmentation summary
- Fast context switch: simply reload the STBR/STLR.
- Fast translation: just loads and compares, and the segment table can be cached.
- Segments can easily be shared: a segment can appear in multiple segment tables.
- The physical layout must still be contiguous: (external) fragmentation is still a problem.

Paging
- Solves the contiguous physical memory problem: a process can always fit if there is available free memory.
- Divide physical memory into frames; the size is a power of two, e.g., 4096 bytes.
- Divide logical memory into pages of the same size.
- For a program of n pages in size: find and allocate n frames, load the program, and set up a page table to translate logical pages to physical frames.
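A minimal sketch of the page-table translation just described, assuming a single-level page table held as an array indexed by VPN; the constants, table contents and function name are made up for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SIZE     4096u        /* 4 KB pages, as on the slide */
    #define FRAME_INVALID UINT32_MAX   /* marker for an unmapped page */

    /* pt[vpn] holds the frame number for virtual page vpn, or FRAME_INVALID. */
    static bool page_translate(const uint32_t *pt, uint32_t npages,
                               uint32_t laddr, uint32_t *paddr)
    {
        uint32_t vpn    = laddr / PAGE_SIZE;   /* upper bits: virtual page number */
        uint32_t offset = laddr % PAGE_SIZE;   /* lower bits: offset within page  */
        if (vpn >= npages || pt[vpn] == FRAME_INVALID)
            return false;                      /* page fault                      */
        *paddr = pt[vpn] * PAGE_SIZE + offset;
        return true;
    }

    int main(void)
    {
        uint32_t pt[4] = { 5, 9, FRAME_INVALID, 2 };   /* made-up page table   */
        uint32_t p;
        if (page_translate(pt, 4, 0x1abc, &p))         /* page 1, offset 0xabc */
            printf("0x1abc -> 0x%x\n", p);             /* prints 0x9abc        */
        if (!page_translate(pt, 4, 0x2000, &p))
            printf("0x2000 -> page fault\n");
        return 0;
    }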

Page table jargon
- A page table maps VPNs to PFNs.
- Page table entry = PTE
- VPN = Virtual Page Number: the upper bits of the virtual or logical address
- PFN = Page Frame Number: the upper bits of the physical address
- Usually the same number of bits

Recall: P6 page tables (32-bit)
[Diagram: the 32-bit logical address is split into two VPN fields plus the VPO (page offset); PDBR points at the page directory, the first VPN field selects a PDE (p=1) pointing at a page table, the second selects a PTE (p=1) pointing at the data page; the physical address is PFN plus PFO. Pages, page directories and page tables are all 4 KB.]

x86-64 paging
[Diagram: the virtual address is split into four 9-bit VPN fields plus the VPO; starting from PDBR, they index in turn the Page Map Level 4 table (PML4E), the Page Directory Pointer Table (PDPE), the Page Directory Table (PDE) and the Page Table (PTE), yielding the PFN that is combined with the PFO to form the physical address.]

Problem: performance
- Every logical memory access needs at least two physical memory accesses: load the page table entry (to get the PFN), then load the desired location.
- Performance: at best half as fast as with no translation.
- Solution: cache page table entries.

Translating with the P6 TLB
[Diagram: the virtual address is split into VPN and VPO; the VPN is further partitioned into a TLB tag (TLBT) and a TLB index (TLBI).]
1. Partition the VPN into TLBT and TLBI.
2. Is the PTE for this VPN cached in set TLBI?
   - Yes (TLB hit): check permissions and build the physical address.
   - No (TLB miss): read the PTE (and the PDE if not cached) from memory and build the physical address.

In fact, x86 combines segmentation and paging
- CPU -> logical address -> segmentation unit -> linear address -> paging unit -> physical address -> physical memory
- Segments do (still) have uses: thread-local state, sandboxing (Google NativeClient, etc.), virtual machine monitors (Xen, etc.)
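The multi-level split sketched above can be written out concretely. A minimal sketch assuming 4 KB pages and four 9-bit indices per the x86-64 diagram; the function name and sample address are invented, and no actual page walk is performed.

    #include <stdint.h>
    #include <stdio.h>

    /* Split a 48-bit virtual address into the four 9-bit table indices and
     * the 12-bit page offset used by 4-level paging with 4 KB pages. */
    static void split_vaddr(uint64_t va, unsigned idx[4], unsigned *vpo)
    {
        *vpo   = va & 0xFFF;             /* bits 11..0  : page offset      */
        idx[3] = (va >> 12) & 0x1FF;     /* bits 20..12 : page table index */
        idx[2] = (va >> 21) & 0x1FF;     /* bits 29..21 : page directory   */
        idx[1] = (va >> 30) & 0x1FF;     /* bits 38..30 : PDP table        */
        idx[0] = (va >> 39) & 0x1FF;     /* bits 47..39 : PML4 index       */
    }

    int main(void)
    {
        unsigned idx[4], vpo;
        uint64_t va = 0x00007f5a1c2d3e4fULL;   /* arbitrary user-space address */
        split_vaddr(va, idx, &vpo);
        printf("PML4=%u PDPT=%u PD=%u PT=%u offset=0x%x\n",
               idx[0], idx[1], idx[2], idx[3], vpo);
        return 0;
    }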

Effective Access Time
- Associative lookup = ε time units
- Assume the memory cycle time is 1 time unit and there are no other CPU caches
- Hit ratio α = fraction of time that a page number is found in the TLB; depends on locality and the number of TLB entries (coverage)
- Effective Access Time, assuming a single-level page table:
  EAT = (1 + ε)α + (2 + ε)(1 - α) = 2 + ε - α
  For example, ε = 0.2 and α = 0.9 give EAT = 2 + 0.2 - 0.9 = 1.3 memory cycles.
- Exercise: work this out for the P6 2-level table.

Memory protection
- Associate protection info with each frame... actually no: with the PTE.
- Valid-invalid bit: valid means the page mapping is legal; invalid means the page is not part of the address space, i.e., the entry does not exist.
- Referencing an invalid address causes a fault: a page fault, or...

Remember the P6 Page Table Entry (PTE)
[Bit layout, from bit 31 down to bit 0: page physical base address (bits 31-12), Avail (11-9), G, 0, D, A, CD, WT, U/S, R/W, P.]
- Page base address: the 20 most significant bits of the physical page address (forces pages to be 4 KB aligned)
- Avail: available for system programmers
- G: global page (don't evict from the TLB on a task switch)
- D: dirty (set by the MMU on writes)
- A: accessed (set by the MMU on reads and writes)
- CD: cache disabled or enabled
- WT: write-through or write-back cache policy for this page
- U/S: user/supervisor
- R/W: read/write
- P: page is present in physical memory (1) or not (0)

P6 protection bits
- Same PTE layout as above; the protection-relevant fields are U/S (user/supervisor), R/W (read/write) and P.
- The P bit can be used to trap on any access (read or write).

Protection information
- Protection information typically includes: readable, writeable, executable (can be fetched into the i-cache).
- Reference bits are used for demand paging.
- Observe: the same attributes can be (and are) associated with segments as well.
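The PTE fields listed above can be pulled apart with shifts and masks. A minimal sketch using the standard x86 bit positions; the struct, function name and sample PTE value are invented for illustration.

    #include <stdint.h>
    #include <stdio.h>

    struct pte_bits {
        uint32_t base;   /* 20 most significant bits: physical page base address */
        int present, rw, us, accessed, dirty, global;
    };

    /* Decode a raw 32-bit P6-style PTE into its fields. */
    static struct pte_bits decode_pte(uint32_t pte)
    {
        struct pte_bits b;
        b.present  = (pte >> 0) & 1;   /* P   : page present in physical memory */
        b.rw       = (pte >> 1) & 1;   /* R/W : writable if set                 */
        b.us       = (pte >> 2) & 1;   /* U/S : user-accessible if set          */
        b.accessed = (pte >> 5) & 1;   /* A   : set by the MMU on any access    */
        b.dirty    = (pte >> 6) & 1;   /* D   : set by the MMU on writes        */
        b.global   = (pte >> 8) & 1;   /* G   : not evicted on task switch      */
        b.base     = pte & 0xFFFFF000; /* 4 KB-aligned physical page address    */
        return b;
    }

    int main(void)
    {
        struct pte_bits b = decode_pte(0x12345067);   /* made-up PTE value */
        printf("base=0x%08x P=%d R/W=%d U/S=%d A=%d D=%d G=%d\n",
               b.base, b.present, b.rw, b.us, b.accessed, b.dirty, b.global);
        return 0;
    }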

Shared pages example
[Diagram: processes P1, P2 and P3 each have a page table mapping three shared, read-only text pages to the same physical frames, plus one private data page each.]

Shared pages
- Shared code: one copy of read-only code is shared among processes.
- Shared code (often) appears at the same location in the logical address space of all processes.
- The data segment is not shared and is different for each process, but is still mapped at the same address (so the code can find it).
- Private code and data: allows code to be relocated anywhere in the address space.

Per-process protection
- Protection bits are stored in the page table; plenty of bits are available in PTEs.
- They are independent of the frames themselves: different processes can share pages, and each page can have different protection for different processes.
- Many uses! E.g., debugging, communication, copy-on-write, etc.

Page table structures
Problem: a simple linear page table is too big.
Solutions:
1. Hierarchical page tables
2. Virtual memory page tables (VAX)
3. Hashed page tables
4. Inverted page tables
Saw these last semester.
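A quick back-of-the-envelope calculation shows why the simple linear table is too big. The sketch assumes 4 KB pages, 4-byte PTEs for a 32-bit address space and 8-byte PTEs for a 48-bit one; these sizes are assumptions for illustration, not from the slides.

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        const uint64_t page  = 4096;   /* 4 KB pages                      */
        const uint64_t pte32 = 4;      /* assumed PTE size, 32-bit system */
        const uint64_t pte64 = 8;      /* assumed PTE size, 64-bit system */

        uint64_t entries32 = (1ULL << 32) / page;   /* 2^20 entries */
        uint64_t entries48 = (1ULL << 48) / page;   /* 2^36 entries */

        printf("32-bit: %llu entries, %llu MB per process\n",
               (unsigned long long)entries32,
               (unsigned long long)(entries32 * pte32 >> 20));
        printf("48-bit: %llu entries, %llu GB per process\n",
               (unsigned long long)entries48,
               (unsigned long long)(entries48 * pte64 >> 30));
        return 0;
    }

One linear table per process would thus cost around 4 MB on a 32-bit machine and hundreds of gigabytes for a 48-bit address space, which is what motivates the alternatives listed above.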

#3 Hashed page tables
- The VPN is hashed into a table; each hash bucket has a chain of logical->physical page mappings.
- The hash chain is traversed to find a match.
- Can be fast, but can be unpredictable.
- Often used for portability and for software-loaded TLBs (e.g., MIPS).
[Diagram: the page number p of the logical address is run through the hash function; the matching chain entry in the hash table supplies the frame number r, which is combined with the offset d to form the physical address.]

#4 Inverted page table
- One system-wide table now maps PFN -> VPN: one entry for each real page of memory.
- Each entry contains the VPN and which process owns the page.
- Bounds the total size of all page information on the machine.
- Hashing is used to locate an entry efficiently.
- Examples: PowerPC, ia64, UltraSPARC.
[Diagram: the logical address (pid, p, d) is translated by searching the inverted page table for the entry matching (pid, p); that entry's index i, combined with the offset d, forms the physical address.]

The need for more bookkeeping
- Most OSes keep their own translation info: a per-process hierarchical page table (Linux), or a system-wide inverted page table (Mach, MacOS).
- Why? Portability; tracking memory objects; software virtual-to-physical translation; physical-to-virtual translation; TLB shootdown.
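The hash-and-traverse lookup described for hashed page tables might look like the following minimal sketch; the bucket count, structure layout and function names are all invented for illustration.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Each bucket holds a chain of (VPN -> PFN) mappings. */
    struct hpt_entry {
        uint64_t vpn, pfn;
        struct hpt_entry *next;
    };

    #define NBUCKETS 1024
    static struct hpt_entry *buckets[NBUCKETS];

    static unsigned hash_vpn(uint64_t vpn) { return vpn % NBUCKETS; }

    static void hpt_insert(uint64_t vpn, uint64_t pfn)
    {
        struct hpt_entry *e = malloc(sizeof *e);
        e->vpn = vpn;
        e->pfn = pfn;
        e->next = buckets[hash_vpn(vpn)];      /* prepend to the hash chain */
        buckets[hash_vpn(vpn)] = e;
    }

    static bool hpt_lookup(uint64_t vpn, uint64_t *pfn)
    {
        for (struct hpt_entry *e = buckets[hash_vpn(vpn)]; e; e = e->next)
            if (e->vpn == vpn) { *pfn = e->pfn; return true; }   /* traverse chain */
        return false;                          /* miss: no mapping for this VPN */
    }

    int main(void)
    {
        uint64_t pfn;
        hpt_insert(0x1234, 0x88);
        if (hpt_lookup(0x1234, &pfn))
            printf("VPN 0x1234 -> PFN 0x%llx\n", (unsigned long long)pfn);
        return 0;
    }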

TLB management
- Recall: the TLB is a cache.
- Machines have many MMUs on many cores, and therefore many TLBs.
- Problem: TLBs should be coherent. Why? It is a security problem if mappings change (e.g., when memory is reused) while stale entries survive in other TLBs.
[Figure sequence: three cores each cache TLB entries of the form (process ID, VPN, PPN, access). When one core changes a shared mapping to read-only, the other cores still hold stale read/write entries and must invalidate them; process 0 on the initiating core can only continue once the shootdown is complete!]

Keeping TLBs consistent
1. Hardware TLB coherence: integrate TLB management with cache coherence; invalidate a TLB entry when the PTE in memory changes. Rarely implemented.
2. Virtual caches: the required cache flush / invalidate will take care of the TLB. High context-switch cost! Most processors use physical (last-level) caches.
3. Software TLB shootdown: the most common approach. The OS on one core notifies all other cores, typically with an IPI; each core performs a local invalidation.
4. Hardware shootdown instructions: broadcast a special address access on the bus, interpreted as a TLB shootdown rather than a cache coherence message. E.g., the PowerPC architecture.
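Option 3 can be modelled in user space to make the handshake concrete. The sketch below is a toy, not OS code: threads stand in for cores, an atomic flag stands in for the IPI, and the initiator spins until every other core has acknowledged its local invalidation (compile with -pthread).

    #include <pthread.h>
    #include <stdatomic.h>
    #include <stdio.h>

    #define NCORES 3
    static atomic_int shootdown_requested;          /* the "IPI": a flag the cores poll */
    static atomic_int acks;
    static int local_tlb_valid[NCORES] = {1, 1, 1}; /* one pretend TLB entry per core   */

    static void *core_main(void *arg)
    {
        int id = (int)(long)arg;
        while (!atomic_load(&shootdown_requested))
            ;                                   /* core runs until the "IPI" arrives */
        local_tlb_valid[id] = 0;                /* local invalidation                */
        atomic_fetch_add(&acks, 1);             /* acknowledge to the initiator      */
        return NULL;
    }

    int main(void)
    {
        pthread_t cores[NCORES - 1];
        for (long i = 1; i < NCORES; i++)
            pthread_create(&cores[i - 1], NULL, core_main, (void *)i);

        /* Core 0 changes a mapping (say, to read-only), then starts a shootdown. */
        local_tlb_valid[0] = 0;
        atomic_store(&shootdown_requested, 1);
        while (atomic_load(&acks) < NCORES - 1)
            ;                                   /* cannot continue until all cores ack */
        printf("shootdown complete; the mapping change is now safe to rely on\n");

        for (int i = 0; i < NCORES - 1; i++)
            pthread_join(cores[i], NULL);
        return 0;
    }

A real kernel sends inter-processor interrupts and uses instructions such as invlpg for the local invalidation rather than polling a flag.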

Friday: demand paging