CPE 631 Lecture 08: Virtual Memory


Lecture 08: Virtual Memory

Topics: Why virtual memory? Virtual-to-physical address translation. Page Table. Translation Lookaside Buffer (TLB).
Aleksandar Milenković, milenka@ece.uah.edu
Electrical and Computer Engineering, University of Alabama in Huntsville
07/02/2005 UAH- 2

Another View of the Memory Hierarchy
Thus far we have covered the upper levels: registers, L1 cache, and L2 cache, which are faster and hold instructions, operands, and blocks. Next comes virtual memory, the lower levels: main memory holds pages, disk holds files, and tape sits below, each level larger and slower.

Why Virtual Memory?
Today's computers run multiple processes, each with its own address space. It is too expensive to dedicate a full address space's worth of memory to each process. The Principle of Locality allows caches to offer the speed of cache memory with the size of DRAM; likewise, DRAM can act as a cache for secondary storage (disk). Virtual memory divides physical memory into blocks and allocates them to different processes.

Virtual Memory: Motivation
Historically, virtual memory was invented when programs became too large for physical memory. It allows the OS to share memory and protect programs from each other (the main reason today). It provides the illusion of very large memory: the sum of the memory of many jobs can be greater than physical memory, and each job may exceed the size of physical memory. It allows available physical memory to be well utilized, and it exploits the memory hierarchy to keep average access time low.

Mapping Virtual to Physical Memory
Consider a program with 4 pages (A, B, C, D): any chunk of virtual memory can be assigned to any chunk of physical memory (a "page"). For example, virtual pages A (0), B (4 KB), C (8 KB), and D (12 KB) might map to physical pages at 12 KB (A), 4 KB (B), and 24 KB (C), with D held on disk.

Virtual Memory Terminology
- Virtual address: used by the programmer; the CPU produces virtual addresses
- Virtual address space: the collection of such addresses
- Physical (or real) address: the address of a word in physical memory
- Memory mapping or address translation: the process of virtual-to-physical address translation
- Page or segment <-> block; page fault or address fault <-> miss

Comparing the two levels of the hierarchy:

Parameter          L1 Cache                         Virtual Memory
Block/Page         16B-128B                         4KB-64KB
Hit time           1-3 cc                           50-150 cc
Miss penalty       8-150 cc                         1M-10M cc (page fault)
 (Access time)     6-130 cc                         800K-8M cc
 (Transfer time)   2-20 cc                          200K-2M cc
Miss rate          0.1-10%                          0.00001-0.001%
Placement          DM or N-way SA; 25-45 bit        Fully associative (OS allows pages to
                   physical address to 14-20 bit    be placed anywhere in main memory);
                   cache address                    32-64 bit virtual address to 25-45
                                                    bit physical address
Replacement        LRU or Random (HW control)       LRU (SW controlled)
Write policy       WB or WT                         WB

Paging vs. Segmentation
Two classes of virtual memory:
- Pages: fixed-size blocks (4 KB - 64 KB)
- Segments: variable-size blocks (1 B - 64 KB/4 GB)
- Hybrid approach: paged segments, where a segment is an integral number of pages

Paging vs. Segmentation: Pros and Cons

                        Page                              Segment
Words per address       One                               Two (segment + offset)
Programmer visible?     Invisible to application          May be visible to application
Replacing a block       Trivial (all blocks are the       Hard (must find a contiguous,
                        same size)                        variable-size unused portion)
Memory use              Internal fragmentation            External fragmentation
inefficiency            (unused portion of page)          (unused pieces of main memory)
Efficient disk traffic  Yes (adjust page size to balance  Not always (small segments
                        access time and transfer time)    transfer few bytes)

Virtual to Physical Address Translation
A program operates in its virtual address space. A hardware mapping turns each virtual address (instruction fetch, load, store) into a physical address into physical memory (including caches). Each program operates in its own virtual address space and is protected from the others; the OS can decide where each goes in memory. A combination of HW + SW provides the virtual-to-physical mapping.

Virtual Memory Mapping Function
With a 10-bit offset, a virtual address consists of a virtual page number (bits 31-10) and an offset (bits 9-0); the physical address consists of a physical page number (bits 29-10) and the same offset (bits 9-0). Use a table lookup (the "Page Table") for the mapping, with the virtual page number as the index:
- Physical offset = virtual offset
- Physical page number (P.P.N., or "page frame") = PageTable[virtual page number]
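The mapping function above can be sketched in a few lines of Python. This is a toy model, not any real machine's translation: it assumes 4 KB pages (12-bit offset, as in the lecture's later examples) and a made-up page table.

```python
# Sketch of virtual-to-physical translation with 4 KB pages (12-bit offset).
# The page table contents below are a hypothetical example.

PAGE_OFFSET_BITS = 12
PAGE_SIZE = 1 << PAGE_OFFSET_BITS   # 4096 bytes

# PageTable[virtual page number] -> physical page number (page frame)
page_table = {0: 4, 1: 2, 2: 6, 3: 1}

def translate(vaddr):
    vpn = vaddr >> PAGE_OFFSET_BITS      # virtual page number is the index
    offset = vaddr & (PAGE_SIZE - 1)     # physical offset = virtual offset
    ppn = page_table[vpn]                # frame = PageTable[VPN]
    return (ppn << PAGE_OFFSET_BITS) | offset

# Virtual address 0x1ABC lies in page 1, which maps to frame 2;
# the offset 0xABC is carried over unchanged.
print(hex(translate(0x1ABC)))   # -> 0x2abc
```

Only the page number changes; the offset within the page passes through untouched, which is exactly why translation can run in parallel with part of the cache access.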

Address Mapping: Page Table
The virtual address is split into a virtual page number and an offset. The Page Table Base Register points to the page table; the virtual page number indexes into it. Each entry holds access rights, a valid bit, and the physical page number. The physical address is the physical page number concatenated with the offset.

Page Table
A page table is an operating system structure which contains the mapping of virtual addresses to physical locations. There are several different ways, all up to the operating system, to keep this data around. Each process running in the operating system has its own page table; the state of a process is its PC, all registers, plus its page table. The OS changes page tables by changing the contents of the Page Table Base Register.

Page Table Entry (PTE) Format
The page table contains mappings for every possible virtual page. The valid bit indicates whether the page is in memory; if not valid (V = 0), the OS maps the page to disk. If valid, the hardware also checks whether the access is permitted: access rights (A.R.) may be Read Only, Read/Write, or Executable.

Virtual Memory Problem #1: Not enough physical memory!
With, say, only 64 MB of physical memory and N processes each with 4 GB of virtual memory, there could be 1K virtual pages per physical page! Spatial locality comes to the rescue: each page is 4 KB with lots of nearby references, and no matter how big a program is, at any time it is only accessing a few pages — its "working set" of recently used pages.
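A PTE-based lookup with the valid bit and access-rights check can be sketched as follows. The table contents and the string encoding of rights are invented for illustration; a real PTE packs these as bit fields.

```python
from dataclasses import dataclass

PAGE_BITS = 12   # 4 KB pages

@dataclass
class PTE:            # Page Table Entry: valid bit, access rights, frame number
    valid: bool
    rights: str       # e.g. "R", "RW", "RX" (toy encoding)
    ppn: int

# Hypothetical per-process page table: page 2 is on disk (valid = 0)
page_table = [PTE(True, "RX", 5), PTE(True, "RW", 3), PTE(False, "RW", 0)]

def access(vaddr, op="R"):
    pte = page_table[vaddr >> PAGE_BITS]
    if not pte.valid:
        # V = 0: the OS must bring the page in from disk
        raise RuntimeError("page fault")
    if op not in pte.rights:
        raise RuntimeError("protection violation")
    return (pte.ppn << PAGE_BITS) | (vaddr & ((1 << PAGE_BITS) - 1))

print(hex(access(0x0010, "R")))   # page 0 -> frame 5: 0x5010
```

Note that protection falls out of the same mechanism as translation: the rights check happens on every access, using bits the OS alone can set.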

VM Problem #2: Fast Translation
Page tables are stored in main memory, so every memory access logically takes at least twice as long: one access to obtain the physical address and a second to get the data. Observation: if there is locality in pages of data, there must be locality in the virtual addresses of those pages. So remember the last translation(s): translations are kept in a special cache called the Translation Look-Aside Buffer, or TLB. The TLB must be on chip; its access time is comparable to the cache's.

Typical TLB Format
Each entry holds a tag (a portion of the virtual address), data (the physical page number), plus Valid, Ref, Dirty, and Access Rights bits.
- Dirty: since write back is used, indicates whether the page must be written to disk when replaced
- Ref: used to help approximate LRU on replacement
- Valid: entry is valid
- Access rights: R (read permission), W (write permission)

Translation Look-Aside Buffers
TLBs are usually small, typically 128-256 entries. Like any other cache, the TLB can be fully associative, set associative, or direct mapped. The processor presents a virtual address to the TLB; on a hit, the TLB supplies the physical address used to access the cache and main memory, and on a miss, a translation via the page table is performed first.

TLB Translation Steps
Assume a 32-entry, fully associative TLB (Alpha AXP 21064):
1. The processor sends the virtual address to all tags
2. If there is a hit (an entry in the TLB with that virtual page number and valid bit 1) and there is no access violation, then
3. The matching tag sends the corresponding physical page number
4. Combine the physical page number and page offset to get the full physical address
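A tiny fully associative TLB with LRU replacement can be modeled as an ordered dictionary. This is a sketch only: real TLB entries also carry valid, dirty, ref, and access-rights bits, and the capacity here (a handful of entries) is chosen for illustration.

```python
from collections import OrderedDict

class TLB:
    """Toy fully associative TLB with LRU replacement."""
    def __init__(self, entries=4):
        self.entries = entries
        self.map = OrderedDict()    # tag (VPN) -> data (PPN)
        self.hits = self.misses = 0

    def lookup(self, vpn, page_table):
        if vpn in self.map:
            self.hits += 1
            self.map.move_to_end(vpn)     # mark as most recently used
            return self.map[vpn]
        self.misses += 1                  # TLB miss: walk the page table
        ppn = page_table[vpn]
        if len(self.map) >= self.entries:
            self.map.popitem(last=False)  # evict the LRU entry
        self.map[vpn] = ppn
        return ppn

page_table = {i: i + 10 for i in range(8)}   # made-up translations
tlb = TLB(entries=2)
for vpn in [0, 0, 1, 0, 2, 0]:
    tlb.lookup(vpn, page_table)
print(tlb.hits, tlb.misses)   # locality makes repeated page 0 accesses hit
```

Because references cluster in a few pages, even a 2-entry TLB turns half of these lookups into hits; this is the locality argument the slide makes.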

What if not in TLB?
Option 1: hardware checks the page table and loads the new Page Table Entry into the TLB. Option 2: hardware traps to the OS, and it is up to the OS to decide what to do. While in the operating system, we don't do translation (virtual memory is turned off). The operating system knows which program caused the TLB fault or page fault and knows which virtual address was requested, so it looks the data up in the page table. If the data is in memory, it simply adds the entry to the TLB, evicting an old one.

What if the data is on disk?
We load the page off the disk into a free block of memory, using a DMA transfer. In the meantime we switch to some other process waiting to run. When the DMA is complete, we get an interrupt and update the process's page table, so when we switch back to the task, the desired data will be in memory.

What if we don't have enough memory?
We choose some other page belonging to a program and transfer it to disk if it is dirty; if it is clean (the disk copy is up to date), we just overwrite that data in memory. The page to evict is chosen by the replacement policy (e.g., LRU), and that program's page table is updated to reflect the fact that its memory moved somewhere else.

Page Replacement Algorithms
- First-In/First-Out (FIFO): in response to a page fault, replace the page that has been in memory for the longest time. Does not make use of the principle of locality: an old but frequently used page could be replaced. Easy to implement (the OS maintains a history thread through page table entries), but usually exhibits the worst behavior.
- Least Recently Used (LRU): selects the least recently used page for replacement. Requires knowledge of past references and is more difficult to implement, but gives good performance.
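The FIFO-vs-LRU contrast can be made concrete with a short simulation over a made-up reference string in which one old page (page 1) is reused repeatedly, exactly the case where FIFO's ignorance of locality hurts it.

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    mem, order, faults = set(), deque(), 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(order.popleft())   # evict the oldest resident page
            mem.add(p)
            order.append(p)
    return faults

def lru_faults(refs, frames):
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)                 # refresh recency on a hit
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)        # evict least recently used
            mem[p] = None
    return faults

# Page 1 is old but frequently used; FIFO keeps evicting it, LRU keeps it.
refs = [1, 2, 3, 1, 4, 1, 5, 1, 6]
print(fifo_faults(refs, 3), lru_faults(refs, 3))   # -> 7 6
```

With 3 frames, FIFO takes 7 faults on this string while LRU takes 6, because LRU's recency information protects the hot page.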

Page Replacement Algorithms (cont'd)
- Not Recently Used (NRU), an estimation of LRU: a reference bit is associated with each page table entry. Ref flag = 1 if the page has been referenced in the recent past, 0 otherwise. The bit is set whenever the page is accessed, and the OS periodically clears the reference bits. If replacement is necessary, choose any page frame whose reference bit is 0.

Selecting a Page Size
Balance the forces in favor of larger pages against those favoring smaller pages.
Larger pages:
- reduce the size of the page table (save space)
- allow larger caches with fast hits
- more efficient transfer from disk (or possibly over the network)
- fewer TLB entries needed, so fewer TLB misses
Smaller pages:
- conserve space better: less wasted storage (internal fragmentation)
- shorten startup time, especially with plenty of small processes

VM Problem #3: Page Table too big!
A 4 GB virtual address space with 4 KB pages means ~1 million page table entries, or 4 MB just for the page table of one process; 25 processes need 100 MB for page tables! The problem gets worse on modern 64-bit machines. The solution is a hierarchical page table.
- Single page table: virtual page number (20 bits) + offset (12 bits)
- Multilevel page table: super page number (10 bits) + page number (10 bits) + offset (12 bits)
A second-level page table exists only for valid entries of the super (first-level) page table. If only 10% of the entries of the super page table are valid, the total mapping size is roughly 1/10 of a single-level page table.
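The 10/10/12 split of the multilevel scheme above is just bit slicing; a sketch (with a made-up example address):

```python
# Splitting a 32-bit virtual address for the two-level page table described
# above: 10-bit super (level-1) index, 10-bit level-2 index, 12-bit offset.

def split(vaddr):
    l1 = (vaddr >> 22) & 0x3FF    # index into the super page table
    l2 = (vaddr >> 12) & 0x3FF    # index into a second-level page table
    off = vaddr & 0xFFF           # byte offset within the 4 KB page
    return l1, l2, off

# Second-level tables are allocated only for super-table entries that are
# actually valid, which is where the space saving comes from.
print(split(0x00C03ABC))   # -> (3, 3, 2748)
```

Each level's index is 10 bits, so each table has 1024 entries and (with 4-byte entries) fits exactly in one 4 KB page — a deliberate design point of such schemes.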

2-level Page Table Example
With 64 MB of physical memory, the super page table points to second-level page tables, which map only the valid regions of the virtual address space (static code, heap, stack); invalid regions need no second-level tables.

The Big Picture
On a memory access: take the virtual address and access the TLB. On a TLB miss, try to read the PTE from the page table; on a page fault, replace a page from disk; otherwise refill the TLB (TLB miss stall) and retry. On a TLB hit, check for a write (if so, set the dirty bit in the TLB and write to the cache/buffer memory); then access the cache. On a cache miss, stall; on a cache hit, deliver the data to the CPU.

The Big Picture (cont'd)
Example configuration: L1 8 KB, L2 4 MB, page size 8 KB, cache line 64 B, 64-bit virtual addresses, 41-bit physical addresses.

Things to Remember
- Apply the Principle of Locality recursively
- Manage memory to disk? Treat it as a cache
- Protection was included as a bonus; now it is critical
- Use a page table of mappings rather than tag/data as in a cache
- Spatial locality means the working set of pages is all that must be in memory for a process to run
- Virtual-to-physical translation too slow? Add a cache of virtual-to-physical translations, called a TLB
- Need a more compact representation to reduce the memory cost of a simple 1-level page table (especially for 32-64 bit addresses)

Instruction Set Principles and Examples

Outline
- What is Instruction Set Architecture?
- Classifying ISAs
- Elements of an ISA: programming registers, type and size of operands, addressing modes, types of operations, instruction encoding
- Role of compilers

Shift in Applications Area
- Desktop computing: emphasizes performance of programs with integer and floating-point data types; little regard for program size or processor power
- Servers: used primarily for database, file server, and web applications; FP performance is much less important than integer and string performance
- Embedded applications: value cost and power, so code size is important because less memory is both cheaper and lower power
- DSPs and media processors (usable in embedded applications): emphasize real-time performance and often deal with infinite, continuous streams of data; architects of these machines traditionally identify a small number of key kernels that are critical to success, and hence are often supplied by the manufacturer

What is an ISA?
The Instruction Set Architecture is the part of the computer visible to the assembly-language programmer or compiler writer. The ISA includes: programming registers, operand access, type and size of operands, the instruction set, addressing modes, and instruction encoding.

Classifying ISAs
- Stack architectures: operands are implicitly on the top of the stack
- Accumulator architectures: one operand is implicitly the accumulator
- General-purpose register architectures: only explicit operands, either registers or memory locations
  - register-memory: memory can be accessed as part of any instruction
  - register-register: memory is accessed only with load and store instructions

Four classes in all: stack, accumulator, register-memory, and load-store (register-register).

Example: code sequence for C = A + B

Stack          Accumulator     Register-memory      Load-store
 Push A         Load  A          Load  R1,A          Load  R1,A
 Push B         Add   B          Add   R3,R1,B       Load  R2,B
 Add            Store C          Store C,R3          Add   R3,R1,R2
 Pop  C                                              Store C,R3
 4 instr.,      3 instr.,        3 instr.,           4 instr.,
 3 mem. op.     3 mem. op.       3 mem. op.          3 mem. op.

Development of ISAs
Early computers used stack or accumulator architectures: an accumulator architecture is easy to build, and a stack architecture closely matches expression evaluation algorithms (without optimizations!). GPR architectures have dominated since 1975:
- registers are faster than memory
- registers are easier for a compiler to use and can hold variables
- memory traffic is reduced, so the program speeds up
- code density is increased (registers are named with fewer bits than memory locations)

Programming Registers
Ideally, use of GPRs should be orthogonal: any register can be used as any operand with any instruction. This may be difficult to implement; some CPUs compromise by limiting the use of some registers. How many registers?
- PDP-11: 8, some reserved (e.g., PC, SP); only a few left, typically used for expression evaluation
- VAX 11/780: 16, some reserved (e.g., PC, SP, FP); enough left to keep some variables in registers
- RISC: 32; can keep many variables in registers

Operand Access
Number of operands: 3 (the instruction specifies the result and 2 source operands) or 2 (one operand is both a source and the result). How many of the operands may be memory addresses in ALU instructions?

Memory addresses   Max. operands   Examples
0                  3               SPARC, MIPS, HP-PA, PowerPC, Alpha, ARM, Trimedia
1                  2               Intel 80x86, Motorola 68000, TI TMS320C54
2/3                2/3             VAX

Operand Access: Comparison

Type             Advantages                           Disadvantages
Reg-Reg (0,3)    Simple, fixed-length instruction     Higher instruction count. Some
                 encoding. Simple code generation.    instructions are short and bit
                 Instructions take similar numbers    encoding may be wasteful.
                 of clocks to execute.
Reg-Mem (1,2)    Data can be accessed without         Source operand is destroyed in a
                 loading first. Instruction format    binary operation. Clocks per
                 tends to be easy to decode and       instruction vary by operand
                 yields good density.                 location.
Mem-Mem (3,3)    Most compact.                        Large variation in instruction size
                                                      and clocks per instruction. Memory
                                                      bottleneck.

Type and Size of Operands
How is the type of an operand designated? Either encoded in the opcode, the most common approach (e.g., Add, AddU), or the data are annotated with tags that are interpreted by hardware. Common operand types:
- character (1 byte) {ASCII}
- half word (16 bits) {short integers, 16-bit Java Unicode}
- word (32 bits) {integers}
- single-precision floating point (32 bits)
- double-precision floating point (64 bits)
- binary packed/unpacked decimal, used infrequently

Type and Size of Operands (cont'd)
Distribution of data accesses by size (SPEC):
- Double word: 0% (Int), 69% (FP)
- Word: 74% (Int), 31% (FP)
- Half word: 19% (Int), 0% (FP)
- Byte: 7% (Int), 0% (FP)
Summary: a new 32-bit architecture should support 8-, 16-, and 32-bit integers and 64-bit floats; 64-bit integers may be needed for 64-bit addressing; other types can be implemented in software.

Operands for media and signal processing:
- Pixel: 8 b (red), 8 b (green), 8 b (blue), 8 b (transparency of the pixel)
- Fixed point (DSP): cheap floating point
- Vertex (graphic operations): x, y, z, w

Addressing Modes
An addressing mode is how a computer system specifies the address of an operand: constants, registers, memory locations, I/O addresses. Memory addressing: since 1980 almost every machine addresses memory to the level of 1 byte. So how do byte addresses map onto a 32-bit word, and can a word be placed on any byte boundary?

Interpreting Addresses
- Big Endian: the address of the most significant byte equals the word address (xx00 = big end of the word); IBM 360/370, MIPS, SPARC, HP-PA
- Little Endian: the address of the least significant byte equals the word address (xx00 = little end of the word); Intel 80x86, DEC VAX, DEC Alpha
- Alignment: require that objects fall on addresses that are multiples of their size
For a word at address a with bytes at a, a+1, a+2, a+3: big endian stores 0x00 0x01 0x02 0x03 (msb first), while little endian stores 0x03 0x02 0x01 0x00.
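The two byte orders and the alignment rule can be demonstrated with the standard library's struct module (the addresses in the alignment check are arbitrary examples):

```python
import struct

# The same 32-bit word 0x00010203 laid out in each byte order:
word = 0x00010203
print(struct.pack(">I", word).hex())   # big endian:    '00010203'
print(struct.pack("<I", word).hex())   # little endian: '03020100'

# Alignment: an n-byte object is aligned if its address is a multiple of n.
def aligned(addr, size):
    return addr % size == 0

print(aligned(0x1004, 4), aligned(0x1002, 4))   # True False
```

Note the little-endian layout is the big-endian layout byte-reversed; mixing the two conventions when exchanging binary data is a classic source of bugs.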

Addressing Modes: Examples

Mode           Example             Meaning                                      When used
Register       ADD R4,R3           Regs[R4] <- Regs[R4]+Regs[R3]                a value is in a register
Immediate      ADD R4,#3           Regs[R4] <- Regs[R4]+3                       for constants
Displacement   ADD R4,100(R1)      Regs[R4] <- Regs[R4]+Mem[100+Regs[R1]]       local variables
Reg. indirect  ADD R4,(R1)         Regs[R4] <- Regs[R4]+Mem[Regs[R1]]           accessing via a pointer
Indexed        ADD R4,(R1+R2)      Regs[R4] <- Regs[R4]+Mem[Regs[R1]+Regs[R2]]  array addressing (base + offset)
Direct         ADD R4,(1001)       Regs[R4] <- Regs[R4]+Mem[1001]               addressing static data
Mem. indirect  ADD R4,@(R3)        Regs[R4] <- Regs[R4]+Mem[Mem[Regs[R3]]]      if R3 holds the address of a pointer p, this yields *p
Autoincrement  ADD R4,(R3)+        Regs[R4] <- Regs[R4]+Mem[Regs[R3]];          stepping through arrays in a loop;
                                   Regs[R3] <- Regs[R3]+d                       d is the element size
Autodecrement  ADD R4,-(R3)        Regs[R3] <- Regs[R3]-d;                      similar to the previous
                                   Regs[R4] <- Regs[R4]+Mem[Regs[R3]]
Scaled         ADD R4,100(R2)[R3]  Regs[R4] <- Regs[R4]+                        to index arrays
                                   Mem[100+Regs[R2]+Regs[R3]*d]

Addressing Mode Usage
Three programs measured on a machine with all addressing modes (VAX); register direct modes are not counted (about one-half of operand references), and PC-relative is not counted (used exclusively for branches). Results:
- Displacement: 42% avg (32-55%)
- Immediate: 33% avg (17-43%)
- Register indirect: 13% avg (3-24%)
- Scaled: 7% avg (0-16%)
- Memory indirect: 3% avg (1-6%)
- Misc.: 2% avg (0-3%)
Displacement and immediate together account for about 75% of references; adding register indirect brings the total to roughly 85%.

Displacement and Immediate Size
- Displacement: 1% of addresses require > 16 bits; 25% require > 12 bits
- Immediate: if immediates must be supported by all operations — loads 10% (Int), 45% (FP); compares 87% (Int), 77% (FP); ALU operations 58% (Int), 78% (FP); all instructions 35% (Int), 10% (FP). Range of values: 50-70% fit within 8 bits, 75-80% within 16 bits.

Addressing Modes: Summary
The important data addressing modes are displacement, immediate, and register indirect. Displacement size should be 12 to 16 bits; immediate size should be 8 to 16 bits.
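The effective-address computations in the table can be sketched as a small interpreter. The register values and memory contents below are made up purely to exercise a few of the modes:

```python
# Toy effective-address calculator for a few modes from the table above.
regs = {"R1": 100, "R2": 8, "R3": 0x200}     # hypothetical register file
mem = {0x200: 0x300}                          # toy memory holding a pointer

def ea(mode, **kw):
    if mode == "displacement":    # d(Rn): constant + register
        return kw["d"] + regs[kw["base"]]
    if mode == "reg_indirect":    # (Rn): register holds the address
        return regs[kw["base"]]
    if mode == "indexed":         # (Rb+Ri): base + index, for arrays
        return regs[kw["base"]] + regs[kw["index"]]
    if mode == "mem_indirect":    # @(Rn): memory itself holds a pointer
        return mem[regs[kw["base"]]]
    raise ValueError(f"unknown mode: {mode}")

print(ea("displacement", d=8, base="R1"))    # 100 + 8 = 108
print(ea("indexed", base="R1", index="R2"))  # 100 + 8 = 108
print(ea("mem_indirect", base="R3"))         # Mem[0x200] = 0x300 = 768
```

Memory indirect is the only mode here that needs an extra memory read just to form the address, which is part of why the measurements above show it used so rarely.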

Addressing Modes for Signal Processing
DSPs deal with continuous, infinite streams of data, hence circular buffers and a modulo or circular addressing mode. The FFT shuffles data at the start or end: 0 (000) => 0 (000), 1 (001) => 4 (100), 2 (010) => 2 (010), 3 (011) => 6 (110), ... hence a bit-reverse addressing mode: take the original value, bit-reverse it, and use that as the address. The six most frequently used modes found in desktop processors account for 95% of DSP addressing-mode use.

Typical Operations
- Data movement: load (from memory), store (to memory), mem-to-mem move, reg-to-reg move, input (from I/O device), output (to I/O device), push (to stack), pop (from stack)
- Arithmetic: integer (binary + decimal) add, subtract, multiply, divide
- Shift: shift left/right, rotate left/right
- Logical: not, and, or, xor, clear, set
- Control: unconditional/conditional jump
- Subroutine linkage: call/return
- System: OS call, virtual memory management
- Synchronization: test-and-set
- Floating point: FP add, subtract, multiply, divide, compare, SQRT
- String: string move, compare, search
- Graphics: pixel and vertex operations, compression/decompression

Top Ten 80x86 Instructions

Rank  Instruction          % total execution
1     load                 22%
2     conditional branch   20%
3     compare              16%
4     store                12%
5     add                  8%
6     and                  6%
7     sub                  5%
8     move reg-reg         4%
9     call                 1%
10    return               1%
      Total                96%

Simple instructions dominate instruction frequency, so support them well.

Operations for Media and Signal Processing
Multimedia processing works at the limits of human perception and uses narrower data words (64-bit FP is not needed), so wide ALUs can operate on several data items at the same time: partitioned add, e.g., four 16-bit adds on one 64-bit ALU — SIMD (Single Instruction, Multiple Data) or vector instructions (see Appendix F and Figure 2.17, page 110). DSP algorithms often need saturating arithmetic (if a result is too large to be represented, it is set to the largest representable number), several rounding modes, and MAC (Multiply and Accumulate) instructions.
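Saturating arithmetic and MAC can be sketched in a few lines; the 16-bit width is an example, and the comment about accumulator width is a general DSP convention rather than any specific chip's spec:

```python
# DSP-style saturating 16-bit add and multiply-accumulate (a sketch).
INT16_MAX, INT16_MIN = 0x7FFF, -0x8000

def sat_add16(a, b):
    s = a + b
    # Clamp to the representable range instead of wrapping around.
    return max(INT16_MIN, min(INT16_MAX, s))

def mac(acc, x, y):
    # Multiply-accumulate: DSPs typically keep the accumulator wider
    # (e.g. 40 bits) than the operands so intermediate sums don't saturate.
    return acc + x * y

print(sat_add16(30000, 10000))   # 32767, not wrapped to -25536
print(mac(0, 3, 4))              # 12
```

For audio or pixel data, clamping at full scale is far less objectionable than the sign flip that two's-complement wraparound would produce, which is why DSPs build saturation into the ALU.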

Instructions for Control Flow
Control flow instruction mix: conditional branches (75% int, 82% fp), call/return (19% int, 8% fp), jump (6% int, 10% fp). Addressing modes for control flow: PC-relative; for returns and indirect jumps the target is not known at compile time, so the instruction specifies a register which contains the target address.

Instructions for Control Flow (cont'd)
Methods for branch evaluation:
- Condition code, CC (ARM, 80x86, PowerPC): tests special bits set by ALU instructions
- Condition register (Alpha, MIPS): tests an arbitrary register
- Compare and branch (PA-RISC, VAX): the compare is part of the branch
Procedure invocation options: do the control transfer and possibly some state saving; at least the return address must be saved (in a link register); the compiler generates loads and stores to save the rest of the state. Caller saving vs. callee saving.

Encoding an Instruction Set
The instruction set architect must choose how to represent instructions in machine code. The operation is specified in one field called the opcode; each operand is specified by a separate address specifier (which tells which addressing mode is used). The design balances:
- many registers and addressing modes add to richness
- many registers and addressing modes increase code size
- lengths of code objects should "match" the architecture, e.g., 16 or 32 bits
Basic variations in encoding:
a) Variable (e.g., VAX): operation & number of operands, then address specifier 1 + address field 1, ..., address specifier n + address field n
b) Fixed (e.g., DLX, MIPS, PowerPC): operation, address field 1, address field 2, address field 3
c) Hybrid (e.g., IBM 360/370, Intel 80x86): a few fixed formats mixing the operation with address specifiers and address fields

Summary of Instruction Formats
If code size is most important, use variable-length instructions; if performance is most important, use fixed-length instructions. Reduced code size in RISCs: hybrid versions with both 16-bit and 32-bit instructions, where the narrow instructions support fewer operations, smaller address and immediate fields, fewer registers, and a 2-address format (ARM Thumb, MIPS16; see Appendix C). IBM's approach: compressed code is kept in main memory, ROMs, and on disk, while caches keep the decompressed code.

Role of Compilers
Structure of recent compilers:
1. Front end: transforms the source language to a common intermediate form (language dependent, machine independent)
2. High-level optimizations: e.g., loop transformations, procedure inlining (somewhat language dependent, machine independent)
3. Global optimizer: global and local optimizations, register allocation (small language dependencies, some machine dependencies)
4. Code generator: instruction selection, machine-dependent optimizations (language independent, highly machine dependent)

Compiler Optimizations
1. High-level optimizations: done on the source
2. Local optimizations: optimize code within a basic block
3. Global optimizations: extend local optimizations across branches (loops)
4. Register allocation: associates registers with operands
5. Processor-dependent optimizations: take advantage of specific architectural knowledge