Computer Architecture


Computer Architecture Lecture 1: Introduction and Basics
Dr. Ahmed Sallam, Suez Canal University, Spring 2016
Based on original slides by Prof. Onur Mutlu

I Hope You Are Here for This
How does an assembly program end up executing as digital logic? What happens in-between? How is a computer designed using logic gates and wires to satisfy specific goals?
[Figure: a spectrum ranging from Programming down to Digital Logic.]
Programmer's view: how a computer system works, with C as a model of computation.
Architect/microarchitect's view: how to design a computer that meets system design goals; choices critically affect both the SW programmer and the HW designer.
HW designer's view: how a computer system works, with digital logic as a model of computation.

Levels of Transformation
"The purpose of computing is insight." (Richard Hamming)
We gain and generate insight by solving problems. How do we ensure problems are solved by electrons?
The transformation stack: Problem → Algorithm → Program/Language → Runtime System (VM, OS) → ISA (Architecture) → Microarchitecture → Logic → Circuits → Electrons

The Power of Abstraction
Levels of transformation create abstractions. Abstraction: a higher level only needs to know about the interface to the lower level, not how the lower level is implemented. E.g., a high-level language programmer does not really need to know what the ISA is and how a computer executes instructions.
Abstraction improves productivity: there is no need to worry about decisions made in underlying levels. E.g., programming in Java vs. C vs. assembly vs. binary vs. specifying the control signals of each transistor every cycle.
Then why would you want to know what goes on underneath or above?

Crossing the Abstraction Layers
As long as everything goes well, not knowing what happens in the underlying level (or above) is not a problem. But what if:
- The program you wrote is running slow?
- The program you wrote does not run correctly?
- The program you wrote consumes too much energy?
- The hardware you designed is too hard to program?
- The hardware you designed is too slow because it does not provide the right primitives to the software?
- You want to design a much more efficient and higher performance system?

An Example: Multi-Core Systems
[Figure: a multi-core chip with four cores and private L2 caches sharing an L3 cache, a DRAM interface, and a DRAM memory controller connected to the DRAM banks. Die photo credit: AMD Barcelona]

Unexpected Slowdowns in Multi-Core
[Figure: slowdowns of a high-priority application (Core 0) and a low-priority "memory performance hog" (Core 1) when run together.]
Moscibroda and Mutlu, "Memory Performance Attacks: Denial of Memory Service in Multi-Core Systems," USENIX Security 2007.

A Question or Two
Can you figure out why there is a disparity in slowdowns if you do not know how the processor executes the programs? Can you fix the problem without knowing what is happening underneath?

Why the Disparity in Slowdowns?
[Figure: two cores (running matlab and gcc) with private L2 caches share an interconnect, the DRAM memory controller, and the shared DRAM memory system with four DRAM banks (Bank 0 to Bank 3).]

DRAM Bank Operation
[Figure: a DRAM bank animation. For the access sequence (Row 0, Column 0), (Row 0, Column 1), (Row 0, Column 85), (Row 1, Column 0): once Row 0 is loaded into the row buffer, the subsequent accesses to Row 0 are row-buffer hits, while the access to Row 1 is a row conflict that requires reloading the row buffer.]
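
The hit/conflict behavior on this slide can be captured in a toy model. This is a sketch with made-up latency units, not real DRAM timing parameters:

```c
#include <assert.h>

/* Illustrative latency units only; real values come from DRAM timing specs. */
enum { T_ROW_HIT = 1, T_ROW_CONFLICT = 3 };

typedef struct { int open_row; } Bank;

/* Access a column in the given row; returns the latency incurred. */
int access_bank(Bank *b, int row) {
    if (b->open_row == row)
        return T_ROW_HIT;       /* requested row is already in the row buffer */
    b->open_row = row;          /* precharge, then activate the new row */
    return T_ROW_CONFLICT;
}
```

Replaying the slide's access sequence, the accesses to Row 0 after the first are row-buffer hits, while the access to Row 1 pays the conflict latency.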

DRAM Controllers
A row-conflict memory access takes significantly longer than a row-hit access, and current controllers take advantage of the row buffer. The commonly used scheduling policy, FR-FCFS [Rixner 2000]*, has two rules:
(1) Row-hit first: service row-hit memory accesses first.
(2) Oldest-first: then service older accesses first.
This scheduling policy aims to maximize DRAM throughput.
*Rixner et al., "Memory Access Scheduling," ISCA 2000.
*Zuravleff and Robinson, "Controller for a synchronous DRAM," US Patent 5,630,096, May 1997.
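
The two FR-FCFS rules can be sketched as a single arbitration function. The struct fields and function name here are illustrative, not from any real controller:

```c
#include <assert.h>

typedef struct { int row; int arrival; } Request;

/* Return the index of the request FR-FCFS services next:
   row hits (matching the currently open row) win over non-hits,
   and age (earlier arrival) breaks ties. Returns -1 if the queue is empty. */
int frfcfs_pick(const Request *q, int n, int open_row) {
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (best < 0) { best = i; continue; }
        int hit_i = (q[i].row == open_row);
        int hit_b = (q[best].row == open_row);
        if (hit_i != hit_b) {
            if (hit_i) best = i;               /* rule 1: row-hit first */
        } else if (q[i].arrival < q[best].arrival) {
            best = i;                          /* rule 2: oldest first */
        }
    }
    return best;
}
```

Note how a young row-hit request beats an older row-conflict request: this is precisely the property the next slides show a "memory performance hog" exploiting.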

Why the Disparity in Slowdowns?
[Figure: the same system diagram (matlab on Core 1, gcc on Core 2), now annotated to locate the unfairness in the shared DRAM memory system.]

A Memory Performance Hog

STREAM (streaming):
  // initialize large arrays A, B
  for (j = 0; j < n; j++) {
      index = j * linesize;
      A[index] = B[index];
  }
  - Sequential memory access
  - Very high row buffer locality (96% hit rate)
  - Memory intensive

RANDOM (random):
  // initialize large arrays A, B
  for (j = 0; j < n; j++) {
      index = rand();
      A[index] = B[index];
  }
  - Random memory access
  - Very low row buffer locality (3% hit rate)
  - Similarly memory intensive

Moscibroda and Mutlu, "Memory Performance Attacks," USENIX Security 2007.

The Problem
Multiple threads share the DRAM controller, and DRAM controllers are designed to maximize DRAM throughput. DRAM scheduling policies are therefore thread-unfair:
- Row-hit first unfairly prioritizes threads with high row buffer locality, i.e., threads that keep on accessing the same row.
- Oldest-first unfairly prioritizes memory-intensive threads.
This leaves the DRAM controller vulnerable to denial-of-service attacks: one can write programs to exploit the unfairness.

What Does the Memory Hog Do?
[Figure: memory request buffer animation. T0 (STREAM) keeps issuing row-hit requests to Row 0, which is held in the row buffer, while T1's (RANDOM) requests to other rows wait.]
Row size: 8 KB, cache block size: 64 B, so 128 (8 KB / 64 B) requests of T0 are serviced before T1.
Moscibroda and Mutlu, "Memory Performance Attacks," USENIX Security 2007.
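
The 128-requests figure is just the row size divided by the cache block size, a one-line check:

```c
#include <assert.h>

/* Number of cache blocks that fit in one DRAM row. */
int blocks_per_row(int row_bytes, int block_bytes) {
    return row_bytes / block_bytes;
}
```

With an 8 KB row and 64 B blocks, up to 128 row-hit requests of T0 can be serviced before T1's request to a different row is scheduled.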

Now That We Know What Happens Underneath
How would you solve the problem? What is the right place to solve it? The programmer? System software? The compiler? Hardware (the memory controller)? Hardware (DRAM)? Circuits?
(Recall the transformation stack: Problem → Algorithm → Program/Language → Runtime System (VM, OS) → ISA (Architecture) → Microarchitecture → Logic → Circuits → Electrons)
Two other goals of this course: enable you to think critically, and enable you to think broadly.

If You Are Interested: Further Readings
- Onur Mutlu and Thomas Moscibroda, "Stall-Time Fair Memory Access Scheduling for Chip Multiprocessors," Proceedings of the 40th International Symposium on Microarchitecture (MICRO), pages 146-158, Chicago, IL, December 2007.
- Sai Prashanth Muralidhara, Lavanya Subramanian, Onur Mutlu, Mahmut Kandemir, and Thomas Moscibroda, "Reducing Memory Interference in Multicore Systems via Application-Aware Memory Channel Partitioning," Proceedings of the 44th International Symposium on Microarchitecture (MICRO), Porto Alegre, Brazil, December 2011.

Takeaway
Breaking the abstraction layers (between components and transformation hierarchy levels) and knowing what is underneath enables you to solve problems.

Another Example: DRAM Refresh

DRAM in the System
[Figure: the same multi-core chip, highlighting the DRAM interface, the DRAM memory controller, and the DRAM banks. Die photo credit: AMD Barcelona]

A DRAM Cell
[Figure: DRAM cells connected to a wordline (row enable) and bitlines.]
A DRAM cell consists of a capacitor and an access transistor. It stores data in terms of charge in the capacitor. A DRAM chip consists of tens of thousands of rows of such cells.

DRAM Refresh
DRAM capacitor charge leaks over time. The memory controller needs to refresh each row periodically to restore charge: activate each row every N ms, with a typical N of 64 ms.
Downsides of refresh:
-- Energy consumption: each refresh consumes energy
-- Performance degradation: the DRAM rank/bank is unavailable while refreshed
-- QoS/predictability impact: (long) pause times during refresh
-- Refresh rate limits DRAM capacity scaling
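
A back-of-the-envelope model shows why this matters: if each refresh makes the rank unavailable for tRFC, the fraction of time lost is (refreshes per window × tRFC) / window. The timing values used below are illustrative assumptions, not from a specific datasheet:

```c
#include <assert.h>
#include <math.h>

/* Fraction of time a rank is unavailable due to refresh.
   t_rfc_ns: assumed per-refresh busy time; window_ms: refresh window. */
double refresh_overhead(double refreshes_per_window,
                        double t_rfc_ns,
                        double window_ms) {
    return (refreshes_per_window * t_rfc_ns) / (window_ms * 1e6);
}
```

With an assumed 8192 refresh commands per 64 ms window and a tRFC of 350 ns, roughly 4.5% of time is lost to refresh; the overhead grows as tRFC lengthens with device density, which is the scaling problem the next slides quantify.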

Refresh Overhead: Performance
[Figure: performance loss due to refresh, from 8% in today's devices up to a projected 46% in future high-density devices.]
Liu et al., "RAIDR: Retention-Aware Intelligent DRAM Refresh," ISCA 2012.

Refresh Overhead: Energy
[Figure: fraction of DRAM energy spent on refresh, from 15% in today's devices up to a projected 47% in future high-density devices.]
Liu et al., "RAIDR: Retention-Aware Intelligent DRAM Refresh," ISCA 2012.

How Do We Solve the Problem?
Do we need to refresh all rows every 64 ms? What if we knew what happened underneath and exposed that information to upper layers?

Underneath: Retention Time Profile of DRAM
[Figure: measured distribution of DRAM row retention times; only a small fraction of rows have short retention times.]
Liu et al., "RAIDR: Retention-Aware Intelligent DRAM Refresh," ISCA 2012.

Taking Advantage of This Profile
Expose this retention time profile information to:
- the memory controller
- the operating system
- the programmer?
- the compiler?
How much information to expose? This affects hardware/software overhead, power consumption, verification complexity, and cost.
How do we determine this profile information? Also, who determines it?

An Example: RAIDR
Observation: most DRAM rows can be refreshed much less often without losing data [Kim+, EDL 2009][Liu+, ISCA 2013].
Key idea: refresh rows containing weak cells more frequently, and other rows less frequently.
1. Profiling: profile the retention time of all rows.
2. Binning: store rows into bins by retention time in the memory controller. Efficient storage with Bloom filters (only 1.25 KB for a 32 GB memory).
3. Refreshing: the memory controller refreshes rows in different bins at different rates.
Results (8-core, 32 GB, SPEC, TPC-C, TPC-H):
- 74.6% refresh reduction at 1.25 KB storage
- ~16%/20% DRAM dynamic/idle power reduction
- ~9% performance improvement
- Benefits increase with DRAM capacity
Liu et al., "RAIDR: Retention-Aware Intelligent DRAM Refresh," ISCA 2012.
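
The payoff of binning can be sketched with a simplified model. This is an illustration of the idea only, not the RAIDR implementation (which tracks the bins with Bloom filters in the memory controller):

```c
#include <assert.h>

/* RAIDR-style refresh intervals in ms; weak rows go in the shortest bin. */
enum { BIN_64MS = 64, BIN_128MS = 128, BIN_256MS = 256 };

/* Total refresh operations issued per second for n rows,
   where bin_ms[i] is the refresh interval assigned to row i. */
long refreshes_per_second(const int *bin_ms, int n) {
    long total = 0;
    for (int i = 0; i < n; i++)
        total += 1000 / bin_ms[i];   /* refreshes per second for this row */
    return total;
}
```

If only one row in four actually needs the 64 ms rate and the rest tolerate 256 ms, refresh traffic drops from 15 per row per second to about 6 on average, which mirrors the large refresh reduction RAIDR reports.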

If You Are Interested: Further Readings
- Jamie Liu, Ben Jaiyen, Richard Veras, and Onur Mutlu, "RAIDR: Retention-Aware Intelligent DRAM Refresh," Proceedings of the 39th International Symposium on Computer Architecture (ISCA), Portland, OR, June 2012.
- Onur Mutlu, "Memory Scaling: A Systems Architecture Perspective," technical talk at MemCon 2013 (MEMCON), Santa Clara, CA, August 2013.

Takeaway
Breaking the abstraction layers (between components and transformation hierarchy levels) and knowing what is underneath enables you to solve problems and design better future systems. Cooperation between multiple components and layers can enable more effective solutions and systems.

What Will You Learn
Computer architecture: the science and art of designing, selecting, and interconnecting hardware components and designing the hardware/software interface to create a computing system that meets functional, performance, energy consumption, cost, and other specific goals.
Traditional definition: "The term architecture is used here to describe the attributes of a system as seen by the programmer, i.e., the conceptual structure and functional behavior as distinct from the organization of the data flow and controls, the logic design, and the physical implementation." (Gene Amdahl, IBM Journal of R&D, April 1964)

Computer Architecture in Levels of Transformation
Problem → Algorithm → Program/Language → Runtime System (VM, OS, MM) → ISA (Architecture) → Microarchitecture → Logic → Circuits → Electrons
Read: Patt, "Requirements, Bottlenecks, and Good Fortune: Agents for Microprocessor Evolution," Proceedings of the IEEE 2001.

Levels of Transformation, Revisited
A user-centric view: the computer is designed for users.
Problem → Algorithm → Program/Language → User → Runtime System (VM, OS, MM) → ISA → Microarchitecture → Logic → Circuits → Electrons
The entire stack should be optimized for the user.

Course Goals
Goal 1: to familiarize those interested in computer system design with both fundamental operation principles and design tradeoffs of processor, memory, and platform architectures in today's systems. Strong emphasis on fundamentals and design tradeoffs.
Goal 2: to provide the necessary background and experience to design, implement, and evaluate a modern processor by performing hands-on RTL and C-level implementation. Strong emphasis on functionality and hands-on design.
Goal 3: to think critically (in solving problems) and broadly across the levels of transformation.

Readings for Next Time
- Patt, "Requirements, Bottlenecks, and Good Fortune: Agents for Microprocessor Evolution," Proceedings of the IEEE 2001.
- Mutlu and Moscibroda, "Memory Performance Attacks: Denial of Memory Service in Multi-core Systems," USENIX Security Symposium 2007.
- P&P Chapter 1 (Fundamentals)
- P&H Chapters 1 and 2 (Intro, Abstractions, ISA, MIPS)
Reference material throughout the course:
- ARM Reference Manual (less so)
- x86 Reference Manual