Chapter 8 :: Memory Systems. Introduction. Processor-Memory Gap


Chapter 8 :: Topics
Digital Design and Computer Architecture, 2nd Edition
David Money Harris and Sarah L. Harris

- Introduction
- Memory System Performance Analysis
- Caches
- Virtual Memory
- Memory-Mapped I/O
- Summary

Introduction
Computer performance depends on:
- Processor performance
- Memory system performance

Memory Interface
[Figure: the processor sends MemWrite, Address, and WriteData to memory (write enable WE) and receives ReadData.]

Processor-Memory Gap
In prior chapters we assumed memory could be accessed in 1 clock cycle, but that hasn't been true since the 1980s.

Memory System Challenge
Make the memory system appear as fast as the processor.
- Use a hierarchy of memories
- Ideal memory is fast, cheap (inexpensive), and large (capacity), but you can only choose two!

Memory Hierarchy
Speed decreases and capacity increases moving down the hierarchy: cache, main memory, virtual memory.

Technology   Price/GB   Access Time (ns)   Bandwidth (GB/s)
SRAM         $10,000    1                  25+
DRAM         $10        10-50              10
SSD          $1         100,000            0.5
HDD          $0.10      10,000,000         0.1

Locality
Exploit locality to make memory accesses fast:
- Temporal locality (locality in time): if data was used recently, it is likely to be used again soon. How to exploit: keep recently accessed data in the higher levels of the memory hierarchy.
- Spatial locality (locality in space): if data was used recently, nearby data is likely to be used soon. How to exploit: when data is accessed, bring nearby data into the higher levels of the memory hierarchy too.

Memory Performance
- Hit: data is found in that level of the memory hierarchy
- Miss: data is not found (must go to the next level)
- Hit Rate = # hits / # memory accesses = 1 - Miss Rate
- Miss Rate = # misses / # memory accesses = 1 - Hit Rate
- Average memory access time (AMAT): the average time for the processor to access data:
  AMAT = t_cache + MR_cache [t_MM + MR_MM (t_VM)]

Memory Performance Example 1
A program has 2,000 loads and stores; 1,250 of these data values are found in the cache. The rest are supplied by other levels of the memory hierarchy. What are the hit and miss rates for the cache?

Hit Rate = 1250/2000 = 0.625
Miss Rate = 750/2000 = 0.375 = 1 - Hit Rate

Memory Performance Example 2
Suppose the processor has 2 levels of hierarchy, cache and main memory, with t_cache = 1 cycle and t_MM = 100 cycles. What is the AMAT of the program from Example 1?

AMAT = t_cache + MR_cache(t_MM) = [1 + 0.375(100)] cycles = 38.5 cycles
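A minimal C sketch (not from the book) that reproduces the arithmetic of Examples 1 and 2; the variable names are illustrative assumptions:

#include <stdio.h>

int main(void) {
    double accesses = 2000.0;   // total loads and stores
    double hits     = 1250.0;   // values found in the cache
    double t_cache  = 1.0;      // cache access time (cycles)
    double t_mm     = 100.0;    // main memory access time (cycles)

    double hit_rate  = hits / accesses;            // 0.625
    double miss_rate = 1.0 - hit_rate;             // 0.375
    double amat = t_cache + miss_rate * t_mm;      // 1 + 0.375*100 = 38.5

    printf("Hit rate %.3f, miss rate %.3f, AMAT %.1f cycles\n",
           hit_rate, miss_rate, amat);
    return 0;
}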

Gene Amdahl (b. 1922)
Amdahl's Law: the effort spent increasing the performance of a subsystem is wasted unless the subsystem affects a large percentage of overall performance.
Co-founded 3 companies, including one called Amdahl Corporation in 1970.

Cache
- Highest level in the memory hierarchy
- Fast (typically ~1 cycle access time)
- Ideally supplies most data to the processor
- Usually holds the most recently accessed data

Cache Design Questions
- What data is held in the cache?
- How is data found?
- What data is replaced?
We focus on data loads, but stores follow the same principles.

What data is held in the cache?
- Ideally, the cache anticipates needed data and puts it in the cache
- But it is impossible to predict the future
- So use the past to predict the future via temporal and spatial locality:
  - Temporal locality: copy newly accessed data into the cache
  - Spatial locality: copy neighboring data into the cache too

Cache Terminology
- Capacity (C): number of data bytes in the cache
- Block size (b): bytes of data brought into the cache at once
- Number of blocks (B = C/b): number of blocks in the cache
- Degree of associativity (N): number of blocks in a set
- Number of sets (S = B/N): each memory address maps to exactly one cache set

How is data found?
- The cache is organized into S sets
- Each memory address maps to exactly one set
- Caches are categorized by the number of blocks in a set:
  - Direct mapped: 1 block per set
  - N-way set associative: N blocks per set
  - Fully associative: all cache blocks in 1 set
- We examine each organization for a cache with capacity C = 8 words and block size b = 1 word, so the number of blocks is B = 8. (Ridiculously small, but it illustrates the organizations.)

Example Cache Parameters: Direct Mapped Cache
C = 8 words (capacity), b = 1 word (block size), so B = 8 (number of blocks).
[Figure: word addresses of a 2^30-word main memory mapping onto the 2^3-word cache. Each address maps to one set: mem[0x00...00] maps to set 0 (000), mem[0x00...04] to set 1 (001), ..., mem[0x00...1C] to set 7 (111); the pattern then repeats, so mem[0x00...24], mem[0xFF...E4], etc. also map to set 1.]
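A minimal C sketch (not from the book) of the mapping the figure shows: splitting a 32-bit byte address into tag, set index, and byte offset for this direct mapped cache. The constants reflect the example parameters (C = 8 words, b = 1 word):

#include <stdio.h>
#include <stdint.h>

#define BYTE_OFFSET_BITS 2          // log2(4 bytes per word)
#define NUM_SETS         8          // B = C/b = 8 blocks, direct mapped
#define SET_BITS         3          // log2(NUM_SETS)

int main(void) {
    uint32_t addr = 0x00000024;
    uint32_t byte_offset = addr & 0x3;
    uint32_t set = (addr >> BYTE_OFFSET_BITS) & (NUM_SETS - 1);
    uint32_t tag = addr >> (BYTE_OFFSET_BITS + SET_BITS);
    printf("addr 0x%08X -> tag 0x%X, set %u, byte offset %u\n",
           addr, tag, set, byte_offset);   // set 1, as in the figure
    return 0;
}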

Direct Mapped Cache Hardware
[Figure: the 32-bit address divides into a 27-bit tag, 3-bit set index, and 2-bit byte offset. The cache is an 8-entry x (1+27+32)-bit SRAM; each entry holds a valid bit V, a 27-bit tag, and 32 bits of data. A comparator checks the stored tag against the address tag: Hit = V AND (tags match).]

Direct Mapped Cache Performance

# MIPS assembly code
        addi $t0, $0, 5
loop:   beq  $t0, $0, done
        lw   $t1, 0x4($0)
        lw   $t2, 0xC($0)
        lw   $t3, 0x8($0)
        addi $t0, $t0, -1
        j    loop
done:

Miss Rate = ?
[Figure: after the loop, sets 1, 2, and 3 hold mem[0x...04], mem[0x...08], and mem[0x...0C] with valid bits set; the other sets are empty.]

Direct Mapped Cache: Conflict N Way Set Associative Cache # MIPS assembly code...1 addi $t, $, 5 loop: beq $t, $, done lw $t1, x4($) lw $t2, x24($) addi $t, $t, -1 j loop done: Byte Set Offset 1 3 V 1... mem[x...4] mem[x...24] Miss Rate = 1/1 = 1% Conflict Misses Set 7 (111) Set 6 (11) Set 5 (11) Set 4 (1) Set 3 (11) Set 2 (1) Set 1 (1) Set () Hit 1 = Set 28 2 Hit Byte Offset V Hit = Way 1 Way 28 32 28 32 1 V 32 Hit 1 Chapter 8 <25> Chapter 8 <26> N Way Set Associative Performance N Way Set Associative Performance # MIPS assembly code # MIPS assembly code addi $t, $, 5 loop: beq $t, $, done lw $t1, x4($) lw $t2, x24($) addi $t, $t, -1 j loop done: Way 1 Way V V Miss Rate =? Set 3 Set 2 Set 1 Set addi $t, $, 5 loop: beq $t, $, done lw $t1, x4($) lw $t2, x24($) addi $t, $t, -1 j loop done: V Way 1 Way V 1...1 mem[x...24] 1... mem[x...4] Miss Rate = 2/1 = 2% Associativity reduces conflict misses Set 3 Set 2 Set 1 Set Chapter 8 <27> Chapter 8 <28> 7

Fully Associative Cache V V V V V V V V Reduces conflict misses Expensive to build Spatial Locality? Increase block size: Block size, b = 4words C = 8 words Direct mapped (1 block per set) Number of blocks, B = 2 (C/b = 8/4 = 2) Block Byte Set Offset Offset 27 2 V 27 11 1 32 32 32 32 1 Set 1 Set = 32 Chapter 8 <29> Hit Chapter 8 <3> Cache with Larger Block Size Direct Mapped Cache Performance Block Byte Set Offset Offset 27 2 V 27 32 32 32 32 Set 1 Set addi $t, $, 5 loop: beq $t, $, done lw $t1, x4($) lw $t2, xc($) lw $t3, x8($) addi $t, $t, -1 j loop done: Miss Rate =? 1 1 11 = 32 Hit Chapter 8 <31> Chapter 8 <32> 8

Direct Mapped Cache Performance addi $t, $, 5 loop: beq $t, $, done lw $t1, x4($) lw $t2, xc($) lw $t3, x8($) addi $t, $t, -1 j loop done: Block Byte Set Offset... 11 Offset 27 2 V... 1 mem[x...c] mem[x...8] mem[x...4] mem[x...] 27 32 32 32 32 = 11 Miss Rate = 1/15 = 6.67% Larger blocks reduce compulsory misses through spatial locality 1 32 1 Set 1 Set Cache Organization Recap Capacity: C Block size: b Number of blocks in cache: B = C/b Number of blocks in a set: N Number of sets: S = B/N Number of Ways Number of Sets Organization (N) (S = B/N) Direct Mapped 1 B N-Way Set Associative 1 < N < B Fully Associative B 1 B / N Hit Chapter 8 <33> Chapter 8 <34> Capacity Misses Cache is too small to hold all data of interest at once If cache full: program accesses data X & evicts data Y Capacity miss when access Y again How to choose Y to minimize chance of needing it again? Least recently used (LRU) replacement: the least recently used block in a set evicted Types of Misses Compulsory: first time data accessed Capacity: cache too small to hold all data of interest Conflict: data of interest maps to same location in cache Miss penalty: time it takes to retrieve a block from lower level of hierarchy Chapter 8 <35> Chapter 8 <36> 9

LRU Replacement # MIPS assembly lw $t, x4($) lw $t1, x24($) lw $t2, x54($) Way 1 Way LRU Replacement # MIPS assembly lw $t, x4($) lw $t1, x24($) lw $t2, x54($) Way 1 Way V U V Set 3 (11) Set 2 (1) Set 1 (1) Set () (a) V U V 1...1 mem[x...24] 1... mem[x...4] Way 1 Way Set 3 (11) Set 2 (1) Set 1 (1) Set () Chapter 8 <37> (b) V U V 1 1...1 mem[x...24] 1...11 mem[x...54] Chapter 8 <38> Set 3 (11) Set 2 (1) Set 1 (1) Set () Cache Summary What data is held in the cache? Recently used data (temporal locality) Nearby data (spatial locality) How is data found? Set is determined by address of data Word within block also determined by address In associative caches, data could be in one of several ways What data is replaced? Least recently used way in the set Chapter 8 <39> Miss Rate Trends Bigger caches reduce capacity misses Greater associativity reduces conflict misses Adapted from Patterson & Hennessy, Computer Architecture: A Quantitative Approach, 211 Chapter 8 <4> 1

Miss Rate Trends Multilevel Caches Bigger blocks reduce compulsory misses Bigger blocks increase conflict misses Chapter 8 <41> Larger caches have lower miss rates, longer access times Expand memory hierarchy to multiple levels of caches Level 1: small and fast (e.g. 16 KB, 1 cycle) Level 2: larger and slower (e.g. 256 KB, 2 6 cycles) Most modern PCs have L1, L2, and L3 cache Chapter 8 <42> Intel Pentium III Die Virtual Gives the illusion of bigger memory Main memory (DRAM) acts as cache for hard disk Chapter 8 <43> Chapter 8 <44> 11

Hierarchy Hard Disk Cache Technology Price / GB SRAM $1, 1 Access Time (ns) Bandwidth (GB/s) 25+ Magnetic Disks Speed Main DRAM $1 1-5 1 Virtual SSD $1 1, HDD $.1 1,,.5.1 Read/Write Head Capacity Physical : DRAM (Main ) Virtual : Hard drive Slow, Large, Cheap Takes milliseconds to seek correct location on disk Chapter 8 <45> Chapter 8 <46> Virtual Cache/Virtual Analogues Virtual addresses Programs use virtual addresses Entire virtual address space stored on a hard drive Subset of virtual address data in DRAM CPU translates virtual addresses into physical addresses (DRAM addresses) not in DRAM fetched from hard drive Protection Each program has own virtual to physical mapping Two programs can use same virtual address for different data Programs don t need to be aware others are running One program (or virus) can t corrupt memory used by another Chapter 8 <47> Cache Virtual Block Page Block Size Page Size Block Offset Page Offset Miss Page Fault Virtual Page Number Physical memory acts as cache for virtual memory Chapter 8 <48> 12

Virtual Definitions Virtual & Physical es Page size: amount of memory transferred from hard disk to DRAM at once translation: determining physical address from virtual address Page table: lookup table used to translate virtual addresses to physical addresses Most accesses hit in physical memory But programs have the large capacity of virtual memory Chapter 8 <49> Chapter 8 <5> Translation Virtual Example System: Virtual memory size: 2 GB = 2 31 bytes Physical memory size: 128 MB = 2 27 bytes Page size: 4 KB = 2 12 bytes Chapter 8 <51> Chapter 8 <52> 13

Virtual Example System: Virtual memory size: 2 GB = 2 31 bytes Physical memory size: 128 MB = 2 27 bytes Page size: 4 KB = 2 12 bytes Organization: Virtual address: 31 bits Physical address: 27 bits Page offset: 12 bits # Virtual pages = 2 31 /2 12 = 2 19 (VPN = 19 bits) # Physical pages = 2 27 /2 12 = 2 15 (PPN = 15 bits) Virtual Example 19 bit virtual page numbers 15 bit physical page numbers Chapter 8 <53> Chapter 8 <54> Virtual Example What is the physical address of virtual address x247c? Virtual Example What is the physical address of virtual address x247c? VPN = x2 VPN x2 maps to PPN x7fff 12 bit page offset: x47c Physical address = x7fff47c Chapter 8 <55> Chapter 8 <56> 14

How to perform translation? Page table Entry for each virtual page Entry fields: Valid bit: 1 if page in physical memory Physical page number: where the page is located Chapter 8 <57> Page Table Example Virtual VPN is index into page table Virtual Page Page Number Offset x2 47C 19 12 Physical V Page Number 1 x 1 x7ffe Page Table 1 x1 1 x7fff 15 12 Hit Physical x7fff 47C Chapter 8 <58> Page Table Example 1 Page Table Example 1 What is the physical address of virtual address x5f2? Physical V Page Number 1 x 1 x7ffe 1 x1 1 x7fff Page Table What is the physical address of virtual address x5f2? VPN = 5 Entry 5 in page table VPN 5 => physical page 1 Physical address: x1f2 Virtual Virtual Page Page Number Offset x5 F2 19 12 Physical V Page Number 1 x 1 x7ffe Page Table 1 x1 1 x7fff 15 12 Hit Physical x1 F2 Chapter 8 <59> Chapter 8 <6> 15

Page Table Example 2 Page Table Example 2 What is the physical address of virtual address x73e? Virtual Virtual Page Page Number Offset x7 3E 19 Physical V Page Number 1 x 1 x7ffe 1 x1 1 x7fff Page Table What is the physical address of virtual address x73e? VPN = 7 Entry 7 is invalid Virtual page must be paged into physical memory from disk Virtual Virtual Page Page Number Offset x7 3E 19 Physical V Page Number 1 x 1 x7ffe 1 x1 1 x7fff Page Table Hit 15 Hit 15 Chapter 8 <61> Chapter 8 <62> Page Table Challenges Page table is large usually located in physical memory Load/store requires 2 main memory accesses: one for translation (page table read) one to access data (after translation) Cuts memory performance in half Unless we get clever Translation Lookaside Buffer (TLB) Small cache of most recent translations Reduces # of memory accesses for most loads/stores from 2to 1 Chapter 8 <63> Chapter 8 <64> 16

TLB Page table accesses: high temporal locality Large page size, so consecutive loads/stores likely to access same page TLB Small: accessed in < 1 cycle Typically 16 512 entries Fully associative > 99 % hit rates typical Reduces # of memory accesses for most loads/stores from 2 to 1 Example 2 Entry TLB Virtual Virtual Page Page Number Offset x2 47C 19 12 Hit 1 = Hit Hit = Entry 1 19 15 19 15 1 Physical Entry Virtual Physical Virtual Physical V Page Number Page Number V Page Number Page Number 1 x7fffd x 1 x2 x7fff TLB 15 12 x7fff 47C Hit 1 Chapter 8 <65> Chapter 8 <66> Protection Multiple processes (programs) run at once Each process has its own page table Each process can use entire virtual address space A process can only access physical pages mapped in its own page table Virtual Summary Virtual memory increases capacity A subset of virtual pages in physical memory Page table maps virtual pages to physical pages address translation A TLB speeds up address translation Different page tables for different programs provides memory protection Chapter 8 <67> Chapter 8 <68> 17

Mapped I/O Processor accesses I/O devices just like memory (like keyboards, monitors, printers) Each I/O device assigned one or more address When that address is detected, data read/written to I/O device instead of memory A portion of the address space dedicated to I/O devices Mapped I/O Hardware Decoder: Looks at address to determine which device/memory communicates with the processor I/O Registers: Hold values written to the I/O devices Read Multiplexer: Selects between memory and I/O devices as source of data sent to the processor Chapter 8 <69> Chapter 8 <7> The Interface Mapped I/O Hardware Decoder Processor MemWrite Write WE Read Processor MemWrite Write WE2 WE1 WE WEM RDsel 1: EN I/O Device 1 1 1 Read EN I/O Device 2 Chapter 8 <71> Chapter 8 <72> 18

Mapped I/O Code Mapped I/O Code Suppose I/O Device 1 is assigned the address xfffffff4 Write the value 42 to I/O Device 1 Read value from I/O Device 1 and place in $t3 Write the value 42 to I/O Device 1 (xfffffff4) addi $t, $, 42 sw $t, xfff4($) Processor MemWrite Write WE1 = 1 WE2 Decoder WE WEM RDsel 1: EN I/O Device 1 Read 1 1 EN I/O Device 2 Chapter 8 <73> Chapter 8 <74> Mapped I/O Code Input/Output (I/O) Systems Read the value from I/O Device 1 and place in $t3 lw $t3, xfff4($) Processor MemWrite Write WE1 WE2 Decoder WE WEM RDsel 1: = 1 Embedded I/O Systems Toasters, LEDs, etc. PC I/O Systems EN I/O Device 1 1 1 Read EN I/O Device 2 Chapter 8 <75> Chapter 8 <76> 19

Embedded I/O Systems Example microcontroller: PIC32 microcontroller 32 bit MIPS processor low level peripherals include: serial ports timers A/D converters Digital I/O // C Code #include <p3xxxx.h> int main(void) { int switches; TRISD = xff; // RD[7:] outputs // RD[11:8] inputs while (1) { // read & mask switches, RD[11:8] switches = (PORTD >> 8) & xf; PORTD = switches; // display on LEDs } } PIC32 RD11 45 RD1 44 RD9 43 RD8 42 RD7 55 RD6 54 RD5 53 RD4 52 RD3 51 RD2 5 RD1 49 RD 46 Switches 3.3 V R = 1KΩ R = 33 Ω Chapter 8 <77> Chapter 8 <78> LEDs Serial I/O Example serial protocols SPI: Serial Peripheral Interface UART: Universal Asynchronous Receiver/Transmitter Also: I 2 C, USB, Ethernet, etc. SPI: Serial Peripheral Interface Master initiates communication to slave by sending pulses on SCK Master sends SDO (Serial Out) to slave, msb first Slave may send data (SDI) to master, msb first Master SCK SDO SDI Slave SCK SDI SDO (a) SCK SDO SDI bit 7 bit 6 bit 5 bit 4 bit 3 bit 2 bit 1 bit bit 7 bit 6 bit 5 bit 4 bit 3 bit 2 bit 1 bit Chapter 8 <79> (b) Chapter 8 <8> 2

UART: Universal Asynchronous Rx/Tx Configuration: start bit (), 7 8 data bits, parity bit (optional), 1+ stop bits (1) data rate: 3, 12, 24, 96, 1152 baud Line idles HIGH (1) Common configuration: 8 data bits, no parity, 1 stop bit, 96 baud (a) (b) DTE TX RX DCE TX RX 1/96 sec Idle Start bit bit 1 bit 2 bit 3 bit 4 bit 5 bit 6 bit 7 Stop Timers // Create specified ms/us of delay using built-in timer #include <P32xxxx.h> void delaymicros(int micros) { if (micros > 1) { // avoid timer overflow delaymicros(1); delaymicros(micros-1); } else if (micros > 6){ TMR1 = ; // reset timer to T1CONbits.ON = 1; // turn timer on PR1 = (micros-6)*2; // 2 clocks per microsecond // Function has overhead of ~6 us IFSbits.T1IF = ; // clear overflow flag while (!IFSbits.T1IF); // wait until overflow flag set } } void delaymillis(int millis) { while (millis--) delaymicros(1); // repeatedly delay 1 ms } // until done Chapter 8 <81> Chapter 8 <82> Analog I/O Pulse Width Modulation (PWM) Needed to interface with outside world Analog input: Analog to digital (A/D) conversion Often included in microcontroller N bit: converts analog input from V ref V ref+ to 2 N 1 Analog output: Digital to analog (D/A) conversion Typically need external chip (e.g., AD558 or LTC1257) N bit: converts digital signal from 2 N 1 to V ref V ref+ Pulse width modulation Average value proportional to duty cycle Pulse width Period OC1/RD 46 Pulse width Duty cycle = Period Add high pass filter on output to deliver average value PIC 1KΩ V out.1 μf Chapter 8 <83> Chapter 8 <84> 21

Other Microcontroller Peripherals Personal Computer (PC) I/O Systems Examples Character LCD VGA monitor Bluetooth wireless Motors Chapter 8 <85> USB: Universal Serial Bus USB 1. released in 1996 standardized cables/software for peripherals PCI/PCIe: Peripheral Component Interconnect/PCI Express developed by Intel, widespread around 1994 32 bit parallel bus used for expansion cards (i.e., sound cards, video cards, etc.) DDR: double data rate memory Chapter 8 <86> Personal Computer (PC) I/O Systems TCP/IP: Transmission Control Protocol and Internet Protocol physical connection: Ethernet cable or Wi Fi SATA: hard drive interface Input/Output (sensors, actuators, microcontrollers, etc.) Acquisition Systems (DAQs) USB Links Chapter 8 <87> 22