Computing Devices Then EECS 252 Graduate Computer Architecture


Computing Devices Then
EECS 252 Graduate Computer Architecture, Lec 1 - Introduction, January 21st, 2009
John Kubiatowicz, Electrical Engineering and Computer Sciences, University of California, Berkeley
http://www.eecs.berkeley.edu/~kubitron/cs252
(Photo: EDSAC, University of Cambridge, UK, 1949)

Computing Systems Today
The world is a large parallel system: microprocessors in everything, with a vast infrastructure behind them. (Collage: massive clusters, clusters, gigabit Ethernet; sensor nets, MEMS for sensor nets, Internet connectivity, cars, refrigerators, routers, robots; scalable, reliable, secure services: databases, information collection, remote storage, online games, commerce.)

What is Computer Architecture?
Application at the top, Physics at the bottom: the gap is too large to bridge in one step (but there are exceptions, e.g. the magnetic compass). In its broadest definition, computer architecture is the design of the abstraction layers that allow us to implement information processing applications efficiently using available manufacturing technologies.

Abstraction Layers in Modern Systems
Application / Algorithm / Programming Language / Operating System/Virtual Machine / Instruction Set Architecture (ISA) / Microarchitecture / Gates/Register-Transfer Level (RTL) / Circuits / Devices / Physics.
Original domain of the computer architect: '50s-'80s. Domain of recent computer architecture: '90s. Parallel computing, security, reliability, and power have driven a reinvigoration of computer architecture from the mid-2000s onward.

Computer Architecture's Changing Definition
1950s to 1960s: Computer Architecture Course = computer arithmetic.
1970s to mid 1980s: Computer Architecture Course = instruction set design, especially ISAs appropriate for compilers.
1990s: Computer Architecture Course = design of CPU, memory system, I/O system, multiprocessors, networks.
2000s: Multi-core design, on-chip networking, parallel programming paradigms, power reduction.
2010s: Computer Architecture Course = self-adapting systems? Self-organizing structures? DNA systems/quantum computing?

Moore's Law
"Cramming More Components onto Integrated Circuits", Gordon Moore, Electronics, 1965: the number of transistors on a cost-effective integrated circuit doubles every 18 months.

Technology constantly on the move!
The number of transistors is not the limiting factor: currently ~1 billion transistors/chip. Problems: too much power, heat, and latency; not enough parallelism.
3-dimensional chip technology? Sandwiches of silicon, with "through-vias" for communication.
On-chip optical connections? Power savings for large packets.
The Intel Core i7 microprocessor ("Nehalem"): 4 cores/chip, 45 nm, Hafnium hi-k dielectric, 731M transistors, shared 8MB L3 cache, 1MB of L2 cache (256K x 4).

Crossroads: Uniprocessor Performance
(Chart: performance vs. VAX-11/780, 1978-2006. From Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th edition, October 2006.)
VAX: 25%/year, 1978 to 1986.
RISC + x86: 52%/year, 1986 to 2002.
RISC + x86: ??%/year, 2002 to present.

Crossroads: Conventional Wisdom in Comp. Arch
Old Conventional Wisdom: Power is free, transistors expensive. New Conventional Wisdom: "Power wall" - power expensive, transistors free (you can put more on a chip than you can afford to turn on).
Old CW: Sufficiently increasing Instruction Level Parallelism via compilers and innovation (out-of-order, speculation, VLIW, ...). New CW: "ILP wall" - law of diminishing returns on more HW for ILP.
Old CW: Multiplies are slow, memory access is fast. New CW: "Memory wall" - memory slow, multiplies fast (200 clock cycles to DRAM memory, 4 clocks for a multiply).
Old CW: Uniprocessor performance 2X / 1.5 yrs. New CW: Power Wall + ILP Wall + Memory Wall = Brick Wall; uniprocessor performance now 2X / 5(?) yrs.
Sea change in chip design: multiple "cores" (2X processors per chip / ~2 years). It is more power efficient to use a large number of simpler processors rather than a small number of complex processors.

Sea Change in Chip Design
Intel 4004 (1971): 4-bit processor, 2312 transistors, 0.4 MHz, 10 μm PMOS, 11 mm² chip.
RISC II (1983): 32-bit, 5-stage pipeline, 40,760 transistors, 3 MHz, 3 μm NMOS, 60 mm² chip.
A 125 mm² chip in 65 nm CMOS = 2312 RISC II + FPU + Icache + Dcache; RISC II shrinks to ~0.02 mm² at 65 nm.
Caches via DRAM or 1-transistor SRAM (www.t-ram.com)? Proximity communication via capacitive coupling at > 1 TB/s? (Ivan Sutherland @ Sun / Berkeley.)
Processor is the new transistor?

ManyCore Chips: The future is here!
Intel 80-core multicore chip (Feb 2007): 80 simple cores, two floating point engines per core, mesh-like "network-on-a-chip", 100 million transistors, 65nm feature size.
Frequency / Voltage / Power / Bandwidth / Performance:
3.16 GHz, 0.95 V, 62W, 1.62 Terabits/s, 1.01 Teraflops
5.1 GHz, 1.2 V, 175W, 2.61 Terabits/s, 1.63 Teraflops
5.7 GHz, 1.35 V, 265W, 2.92 Terabits/s, 1.81 Teraflops
"ManyCore" refers to many processors per chip: 64? 128? Hard to say where the exact boundary is. How to program these? Use 2 CPUs for video/audio, 1 for the word processor, 1 for the browser, and 76 for virus checking??? Something new is clearly needed here.

The End of the Uniprocessor Era
Single biggest change in the history of computing systems.

Déjà vu all over again?
Multiprocessors imminent in the 1970s, '80s, '90s, ... "... today's processors are nearing an impasse as technologies approach the speed of light..." - David Mitchell, The Transputer: The Time Is Now (1989). The Transputer was premature: custom multiprocessors strove to lead uniprocessors, and procrastination was rewarded with 2X sequential performance every 1.5 years.
"We are dedicating all of our future product development to multicore designs. ... This is a sea change in computing." - Paul Otellini, President, Intel (2004). The difference now is that all microprocessor companies have switched to multicore (AMD, Intel, IBM, Sun; all new Apples have 2-4 CPUs). Procrastination penalized: 2X sequential performance / 5 yrs. Biggest programming challenge: going from 1 to 2 CPUs.

Problems with Sea Change
Algorithms, programming languages, compilers, operating systems, architectures, libraries, ... are not ready to supply Thread Level Parallelism or Data Level Parallelism for 1000 CPUs / chip. A whole new approach is needed: people have been working on parallelism for over 50 years without general success.
Architectures are not ready for 1000 CPUs / chip. Unlike Instruction Level Parallelism, this cannot be solved by computer architects and compiler writers alone, but it also cannot be solved without the participation of computer architects.
PARLab: Berkeley researchers from many backgrounds meeting since 2005 to discuss parallelism: Krste Asanovic, Ras Bodik, Jim Demmel, Kurt Keutzer, John Kubiatowicz, Edward Lee, George Necula, Dave Patterson, Koushik Sen, John Shalf, John Wawrzynek, Kathy Yelick, ... Circuit design, computer architecture, massively parallel computing, computer-aided design, embedded hardware and software, programming languages, compilers, scientific programming, and numerical analysis.

The Instruction Set: a Critical Interface
The instruction set sits between software and hardware. Properties of a good abstraction: lasts through many generations (portability); used in many different ways (generality); provides convenient functionality to higher levels; permits an efficient implementation at lower levels.

Instruction Set Architecture
"... the attributes of a [computing] system as seen by the programmer, i.e. the conceptual structure and functional behavior, as distinct from the organization of the data flows and controls, the logic design, and the physical implementation." - Amdahl, Blaauw, and Brooks, 1964.
The ISA defines: the organization of programmable storage; data types and data structures (encodings and representations); instruction formats; the instruction (or operation code) set; modes of addressing and accessing data items and instructions; exceptional conditions.

Example: MIPS R3000
Programmable storage: 2^32 bytes of memory; 31 x 32-bit GPRs (R0 = 0); 32 x 32-bit FP regs (paired DP); HI, LO, PC. Data types? Format? Addressing modes?
Arithmetic/logical: Add, AddU, Sub, SubU, And, Or, Xor, Nor, SLT, SLTU, AddI, AddIU, SLTI, SLTIU, AndI, OrI, XorI, LUI, SLL, SRL, SRA, SLLV, SRLV, SRAV.
Memory access: LB, LBU, LH, LHU, LW, LWL, LWR, SB, SH, SW, SWL, SWR.
Control: J, JAL, JR, JALR, BEq, BNE, BLEZ, BGTZ, BLTZ, BGEZ, BLTZAL, BGEZAL.
32-bit instructions on word boundaries. (A small example of how compiled C code maps onto these instructions appears at the end of this section.)

ISA vs. Computer Architecture
The old definition of computer architecture = instruction set design; other aspects of computer design were called "implementation", insinuating that implementation is uninteresting or less challenging. Our view is that computer architecture >> ISA: the architect's job is much more than instruction set design, and the technical hurdles today are more challenging than those in instruction set design. Since instruction set design is not where the action is, some conclude that computer architecture (using the old definition) is not where the action is. We disagree with that conclusion; we agree that the ISA is not where the action is (the ISA is in a CA:AQA 4/e appendix).

Computer Architecture is an Integrated Approach
What really matters is the functioning of the complete system: hardware, runtime system, compiler, operating system, and application. In networking, this is called the "End to End" argument. Computer architecture is not just about transistors, individual instructions, or particular implementations; e.g., the original RISC projects replaced complex instructions with a compiler plus simple instructions. It is very important to think across all hardware/software boundaries: new technology brings new capabilities, new architectures, new tradeoffs. There is a delicate balance between backward compatibility and efficiency.
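
To make the R3000 register and instruction lists above a little more concrete, here is a small hand-written sketch (not from the original slides) of how a compiler might map one C statement onto instructions named on the slide; the register choices and $gp-relative addressing are illustrative assumptions, not actual compiler output.

```c
/* Illustrative only: one plausible MIPS R3000 translation of a C statement,
 * using instructions from the slide (LW, Add, SW). The register assignments
 * ($t0-$t2) and $gp-relative addressing are assumptions for illustration. */
int a, b, c;

void example(void) {
    a = b + c;      /* lw   $t0, b($gp)    -- load operand b          */
                    /* lw   $t1, c($gp)    -- load operand c          */
                    /* add  $t2, $t0, $t1  -- register-register add   */
                    /* sw   $t2, a($gp)    -- store the result into a */
}
```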

Computer Architecture is Design and Analysis
Architecture is an iterative process: searching the space of possible designs, at all levels of computer systems. Creativity produces design ideas; cost/performance analysis sorts them into good ideas, mediocre ideas, and bad ideas.

CS252 Executive Summary
What you'll understand after taking CS252: not just the processor you built in CS152, but also the technology behind chip-scale multiprocessors.

Computer Architecture Topics
Input/Output and Storage: disks, WORM, tape; RAID; emerging technologies; interleaving.
Memory Hierarchy: DRAM, L2 cache, L1 cache; VLSI; coherence, bandwidth, latency; bus protocols.
Instruction Set Architecture: addressing, protection, exception handling.
Pipelining and Instruction Level Parallelism: pipelining, hazard resolution, superscalar, reordering, prediction, speculation, vector, dynamic compilation.
Network communication with other processors.

Computer Architecture Topics (other processors)
Processor-Memory-Switch organizations (P, M, S connected by an interconnection network); multiprocessors; networks and interconnections: shared memory, message passing, data parallelism; network interfaces; topologies, routing, bandwidth, latency, reliability.

Tentative Topics Coverage
Textbook: Hennessy and Patterson, Computer Architecture: A Quantitative Approach, 4th Ed., 2006. Research papers handed out in class.
1.5 weeks: Review: fundamentals of computer architecture, instruction set architecture, pipelining.
2.5 weeks: Pipelining, interrupts, and instruction-level parallelism; vector processors.
1 week: Memory hierarchy.
1.5 weeks: Networks and interconnection technology.
1 week: Parallel models of computation.
1 week: Message-passing interfaces.
1 week: Shared memory hardware.
1.5 weeks: Multithreading, latency tolerance, GPU.
1.5 weeks: Fault tolerance, input/output and storage.
0.5 weeks: Quantum computing, DNA computing.

CS252: Information
Instructor: Prof John D. Kubiatowicz; Office: 673 Soda Hall, 643-6817, kubitron@cs; Office Hours: Mon 2:30-4:00 or by appt.
T.A.: Victor Wen (vwen@cs)
Class: Mon/Wed, 1:00-2:30pm, 310 Soda Hall
Text: Computer Architecture: A Quantitative Approach, Fourth Edition
Web page: http://www.cs/~kubitron/cs252/ (lectures available online the morning of each lecture)
Newsgroup: ucb.class.cs252
Email: cs252@kubi.cs.berkeley.edu

Lecture style
1-minute review; 20-minute lecture/discussion; 5-minute administrative matters; 25-minute lecture/discussion; 5-minute break (water, stretch); 25-minute lecture/discussion. The instructor will come to class early and stay after to answer questions. (Chart: audience attention vs. time, dipping at the 20 min. break and at "In conclusion, ...".)

Research Paper Reading
As graduate students, you are now researchers. Most information of importance to you will be in research papers, and the ability to scan and understand research papers is key to success. So: you will read lots of papers in this course! Quick paragraph summaries will be due in class. Papers are an important supplement to the book, and we will discuss some of them in class. Papers will be scanned and put on the web page, available (hopefully) more than 1 week in advance.

Quizzes
Reduce the pressure of taking quizzes: two graded quizzes, tentatively Wed March 18th and Wed May 6th. Our goal is to test knowledge rather than speed of writing: 3 hrs to take a 1.5-hr test (5:30-8:30 PM, TBA location). For both mid-term quizzes you can bring a summary sheet (transfer ideas from book to paper). Last chance Q&A: during class time on the day of the exam. Students and staff meet over free pizza/drinks at La Val's: Wed March 18th (8:30 PM) and Wed May 6th (8:30 PM).

Research Project
This is a research-oriented course. The project provides an opportunity to do "research in the small" to help make the transition from good student to research colleague; the assumption is that you will advance the state of the art in some way. Projects are done in groups of 2 or 3 students.
Topic? Should be topical to CS252; there are exciting possibilities related to the ParLab research agenda.
Details: meet 3 times with faculty/TA to show progress; give an oral presentation; give a poster session (possibly); write a report like a conference paper.
Can you share a project with other systems projects? Under most circumstances the answer is yes, but you need to OK it with me.

More Course Info
Grading: 10% class participation; 10% reading writeups; 40% examinations (2 midterms); 40% research project (work in pairs).
Schedule: 2 graded quizzes: Wed March 18th and Wed May 6th. President's Day: February 16th. Spring Break: Monday March 23rd to March 30th. 252 last lecture: Monday, May 11th. Oral presentations: Wednesday May 13th? 252 poster session: ??? Project papers/URLs due: Monday May 18th. Project suggestions: TBA.

Coping with CS 252
Undergrads must have taken CS152. Grad students with too varied a background? In the past, CS grad students took written prelim exams on undergraduate material in hardware, software, and theory; the 1st 5 weeks reviewed background and helped 252, 262, 270. Prelims were dropped, so some may be unprepared for CS 252.
Grads without the CS152 equivalent may have to work hard. Review Appendix A, B, C; the CS 152 home page; maybe Computer Organization and Design (COD) 3/e (Chapters 1 to 8 of COD if you never took the prerequisite; if you took a class, be sure COD Chapters 2, 6, 7 are familiar). I can loan you a copy. We will spend 2 lectures on review of pipelining and the memory hierarchy.

Building Hardware that Computes

Finite State Machines
System state is explicit in the representation. Transitions between states are represented as arrows with inputs on the arcs. Output may be either part of the state or on the arcs.
Example: Mod 3 Machine. The input is a binary number presented MSB first; the machine tracks the value of the bits seen so far modulo 3 using three states (Alpha/0, Beta/1, Delta/2), with a transition out of each state for input 0 and for input 1. (A small code sketch of this machine follows at the end of this section.)

Implementation as Comb logic + Latch
A state machine is implemented as combinational logic plus a latch holding the current state. Mealy machine: outputs on the arcs (a function of input and state). Moore machine: outputs attached to the states. (Figure: the Mod 3 state diagrams and a state-transition table with columns Input, State_old, State_new, Div.)

Microprogrammed Controllers
A state machine in which part of the state is a "micro-PC". Explicit circuitry increments or changes the PC. Includes a ROM with "microinstructions"; controlled logic implements at least branches and jumps. (Figure: micro-PC with +1/branch MUX addressing a ROM of instructions that drives combinational logic and the controlled machine; example microprogram with instructions such as forw 35, b_no_obstacles, back, rotate 90, and goto.)
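
As a sanity check on the Mod 3 machine described above, here is a small C sketch (mine, not from the slides) of the same three-state FSM. It consumes bits MSB first; because appending a bit b to a number doubles it and adds b, the next state is (2*state + b) mod 3. The 7-bit test input in main() is my own choice.

```c
#include <stdio.h>

/* Three states, named after the slide's diagram: Alpha = 0, Beta = 1, Delta = 2.
 * The state is always the value of the bits consumed so far, modulo 3. */
enum state { ALPHA = 0, BETA = 1, DELTA = 2 };

static enum state mod3_step(enum state s, int bit) {
    /* Appending bit b: value -> 2*value + b, so state -> (2*state + b) mod 3. */
    return (enum state)((2 * s + bit) % 3);
}

int main(void) {
    /* Example input (my choice): 106 = 1101010 in binary, fed MSB first. */
    int bits[] = {1, 1, 0, 1, 0, 1, 0};
    enum state s = ALPHA;
    for (int i = 0; i < 7; i++)
        s = mod3_step(s, bits[i]);
    printf("final state = %d (expect %d)\n", s, 106 % 3);
    return 0;
}
```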

Fundamental Execution Cycle
Instruction Fetch: obtain the instruction from program storage. Instruction Decode: determine the required actions and instruction size. Operand Fetch: locate and obtain operand data. Execute: compute the result value or status. Result Store: deposit results in storage for later use. Next Instruction: determine the successor instruction. (Figure: processor with registers and functional units connected to a memory holding program and data - the von Neumann bottleneck.)

What's a Clock Cycle?
A latch or register followed by combinational logic. In the old days the cycle was about 10 levels of gates; today it is determined by numerous time-of-flight issues plus gate delays: clock propagation, wire lengths, drivers.

Pipelined Instruction Execution
(Pipeline diagram: successive instructions, in program order, overlap across clock cycles 1-7, each passing through its Ifetch, ALU, and DMem stages.)

Limits to pipelining
Maintain the von Neumann illusion of one-instruction-at-a-time execution. Hazards prevent the next instruction from executing during its designated clock cycle:
Structural hazards: an attempt to use the same hardware to do two different things at once.
Data hazards: an instruction depends on the result of a prior instruction still in the pipeline (see the small example below).
Control hazards: caused by the delay between the fetching of instructions and decisions about changes in control flow (branches and jumps).
Power: too many things happening at once can melt your chip! Parts of the system that are not being used must be disabled: clock gating, asynchronous design, low voltage swings, ...
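
To make the data-hazard case concrete, here is a tiny illustrative fragment (my example, not from the slides). In a compiled MIPS-style sequence, the load's result is still moving down the pipeline when the following add wants to read it, so the hardware must forward the value or stall for a cycle.

```c
/* Illustrative load-use data hazard. In a classic 5-stage pipeline the lw
 * below has not yet written x back when the dependent add needs it, so the
 * pipeline must forward the loaded value or stall the add for one cycle.
 * The MIPS mapping in the comments is an assumption for illustration. */
int mem_word;              /* some value sitting in memory */

int load_use(int y) {
    int x = mem_word;      /* lw  $t0, mem_word($gp)  -- result available late */
    return x + y;          /* add $v0, $t0, $a0       -- needs $t0 right away  */
}
```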

Progression of ILP
1st generation RISC - pipelined: a full 32-bit processor fit on a chip and issued almost 1 IPC (but needed to access memory 1+x times per cycle). The floating-point unit was on another chip and the cache controller on a third, with an off-chip cache; 1 board per processor led to multiprocessor systems.
2nd generation - superscalar: processor and floating point unit on one chip (and some cache). Issuing only one instruction per cycle uses at most half the hardware, so fetch multiple instructions and issue a couple (growing from 2 to 4 to 8). How do you manage dependencies among all these instructions? Where does the parallelism come from?
VLIW: expose some of the ILP to the compiler and allow it to schedule instructions to reduce dependences.

Modern ILP
Dynamically scheduled, out-of-order execution. Current microprocessors handle 6-8 instructions per cycle, and pipelines are 10s of cycles deep, so many instructions are in execution at once. Unfortunately, hazards cause much of that work to be discarded.
What happens: grab a bunch of instructions, determine all their dependences, eliminate dependences wherever possible, throw them all into the execution unit, and let each one move forward as its dependences are resolved. It appears as if the program executed sequentially; on a trap or interrupt, the state of the machine between instructions is captured perfectly.
Huge complexity: the complexity of many components scales as n² in the issue width, and power consumption is a big problem.

When all else fails - guess
Programs make decisions as they go: conditionals, loops, and calls translate into branches and jumps (1 of every 5 instructions). How do you determine what instructions to fetch when the ones before them haven't executed yet?
Branch prediction: lots of clever machine structures to predict the future based on history, plus machinery to back out of mis-predictions (a minimal predictor sketch follows below).
Execute all the possible branches: likely to hit additional branches and to perform stores - speculative threads. What can hardware do to make programming (with performance) easier?

Have we reached the end of ILP?
Multiple processors easily fit on a chip. Every major microprocessor vendor has gone to multithreading: a thread is a locus of control, an execution context; fetch instructions from multiple threads at once and throw them all into the execution unit (Intel: hyperthreading; Sun: ...). The concept has existed in high performance computing for 20 years (or is it 40? CDC 6600).
Vector processing: each instruction processes many distinct data items (e.g., MMX).
Raise the level of architecture: many processors per chip (Tensilica Configurable Proc).
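
Branch prediction is described only at a high level above. As one concrete example (an assumed illustration, not a structure from the slides) of a "clever machine structure that predicts the future based on history", here is a minimal 2-bit saturating-counter predictor in C; the table size, indexing scheme, and the branch PC in the demo are arbitrary choices.

```c
#include <stdbool.h>
#include <stdio.h>

/* Minimal 2-bit saturating-counter branch predictor: one counter per table
 * entry, indexed by low bits of the branch PC. Counter values 0-1 predict
 * not-taken, 2-3 predict taken; each outcome nudges the counter that way. */
#define PRED_ENTRIES 1024

static unsigned char counters[PRED_ENTRIES];   /* all start at 0 (not-taken) */

static bool predict(unsigned long pc) {
    return counters[pc % PRED_ENTRIES] >= 2;   /* taken if counter is 2 or 3 */
}

static void update(unsigned long pc, bool taken) {
    unsigned char *c = &counters[pc % PRED_ENTRIES];
    if (taken  && *c < 3) (*c)++;              /* saturate at 3 */
    if (!taken && *c > 0) (*c)--;              /* saturate at 0 */
}

int main(void) {
    /* Tiny demo: a loop-closing branch at a made-up PC, taken 9 times out of 10. */
    unsigned long pc = 0x400100;
    int mispredicts = 0;
    for (int i = 0; i < 100; i++) {
        bool taken = (i % 10) != 9;
        if (predict(pc) != taken) mispredicts++;
        update(pc, taken);
    }
    printf("mispredicts = %d / 100\n", mispredicts);
    return 0;
}
```

A real front end would consult predict() at fetch time and call update() when the branch resolves; mispredicted work is what the back-out machinery mentioned above has to discard.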

The Memory Abstraction
An association of <name, value> pairs, typically named by byte addresses, with values often aligned on multiples of their size. A sequence of Reads and Writes: a Write binds a value to an address; a Read of an address returns the most recently written value bound to that address. (Interface signals: command (R/W), address (name), data (W), data (R), done.)

Processor-DRAM Memory Gap (latency)
(Chart, 1980-2000:) µProc performance grows 60%/yr (2X / 1.5 yr); DRAM performance grows 9%/yr (2X / 10 yrs); the Processor-Memory Performance Gap grows 50% / year.

Levels of the Memory Hierarchy (circa 1995 numbers)
CPU registers: 100s of bytes, << 1s ns; instructions and operands staged by the program/compiler in 1-8 byte units.
Cache: 10s-100s of KBytes, ~1 ns, $10s/MByte; blocks of 8-128 bytes transferred by the cache controller.
Main memory: MBytes, 100ns-300ns, $<1/MByte; pages of 512 bytes-4 KBytes transferred by the OS.
Disk: 10s of GBytes, 10 ms (10,000,000 ns), $0.001/MByte; files of MBytes transferred by the user/operator.
Tape: "infinite", sec-min, $0.0014/MByte.
Upper levels are faster; lower levels are larger.

The Principle of Locality
Programs access a relatively small portion of the address space at any instant of time. Two different types of locality:
Temporal locality (locality in time): if an item is referenced, it will tend to be referenced again soon (e.g., loops, reuse).
Spatial locality (locality in space): if an item is referenced, items whose addresses are close by tend to be referenced soon (e.g., straight-line code, array access).
For the last 30 years, HW has relied on locality for speed.
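
As a concrete illustration of both kinds of locality (my example, not from the slides): the row-major loop below walks memory sequentially, so consecutive accesses fall in the same cache block (spatial locality), and the running sum is reused every iteration (temporal locality); the column-major loop strides N doubles at a time and wastes most of that spatial locality.

```c
#include <stdio.h>

#define N 1024
static double a[N][N];

/* Row-major traversal: consecutive iterations touch adjacent addresses, so
 * each cache block fetched from memory is fully used (spatial locality).
 * 'sum' is reused every iteration (temporal locality) and stays in a register. */
double sum_row_major(void) {
    double sum = 0.0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            sum += a[i][j];
    return sum;
}

/* Column-major traversal of the same data: successive accesses are N doubles
 * apart, so each access may touch a different cache block. */
double sum_col_major(void) {
    double sum = 0.0;
    for (int j = 0; j < N; j++)
        for (int i = 0; i < N; i++)
            sum += a[i][j];
    return sum;
}

int main(void) {
    /* Both sums are 0.0 here; timing the two functions shows the locality gap. */
    printf("%f %f\n", sum_row_major(), sum_col_major());
    return 0;
}
```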

Is it all about memory system design?
Modern microprocessors are almost all cache. (Die photo: most of the chip area is occupied by caches.)

Memory Abstraction and Parallelism
Maintaining the illusion of sequential access to memory across a distributed system. What happens when multiple processors access the same memory at once? Do they see a consistent picture? (Figures: processors P1..Pn, each with a cache, sharing memory over an interconnection network; alternatively, memory distributed with the processors.) Processing and processors embedded in the memory?

Is it all about communication?
(Figure: Pentium IV chipset - processor, caches, busses, memory, controllers, and I/O devices: disks, displays, keyboards, network adapters.)

Breaking the HW/Software Boundary
Moore's law (more and more transistors) is all about volume and regularity. What if you could pour nano-acres of unspecific digital logic "stuff" onto silicon and do anything with it? Very regular, large volume: Field Programmable Gate Arrays. The chip is covered with logic blocks with FFs, RAM blocks, and interconnect, and all three are programmable by setting configuration bits. These are huge? Can each program have its own instruction set? Do we compile the program entirely into hardware?

Bell's Law - new class per decade
(Chart: log(people per computer) vs. year.) Each new class is enabled by technological opportunities. Computers become smaller, more numerous, and more intimately connected, and each class brings in a new kind of application: number crunching, data storage, productivity, interactive use, streaming information to/from the physical world. Used in many ways not previously imagined.

It's not just about bigger and faster!
Complete computing systems can be tiny and cheap: system on a chip. Resource efficiency: real-estate, power, pins, ...

Focus on the Common Case (Quantifying the Design Process)
Common sense guides computer design; since it's engineering, common sense is valuable. In making a design trade-off, favor the frequent case over the infrequent case.
E.g., the instruction fetch and decode unit is used more frequently than the multiplier, so optimize it 1st.
E.g., if a database server has 50 disks per processor, storage dependability dominates system dependability, so optimize it 1st.
The frequent case is often simpler and can be done faster than the infrequent case. E.g., overflow is rare when adding 2 numbers, so improve performance by optimizing the more common case of no overflow; this may slow down overflow, but overall performance is improved by optimizing for the normal case.
What is the frequent case, and how much is performance improved by making that case faster? That leads to Amdahl's Law.

Processor performance equation
CPU time = Seconds / Program = (Instructions / Program) x (Cycles / Instruction) x (Seconds / Cycle)
What affects each factor:
Program: instruction count.
Compiler: instruction count, (CPI).
Instruction set: instruction count, CPI.
Organization: CPI, clock rate.
Technology: clock rate.

Amdahl's Law
ExTime_new = ExTime_old x [(1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]
Speedup_overall = ExTime_old / ExTime_new = 1 / [(1 - Fraction_enhanced) + Fraction_enhanced / Speedup_enhanced]
Best you could ever hope to do:
Speedup_maximum = 1 / (1 - Fraction_enhanced)

Amdahl's Law example
The new CPU is 10X faster, but the server is I/O bound, so 60% of the time is spent waiting for I/O:
Speedup_overall = 1 / [(1 - 0.4) + 0.4 / 10] = 1 / 0.64 = 1.56
Apparently, it's human nature to be attracted by "10X faster" instead of keeping in perspective that it's just 1.6X faster. (A small code check of this calculation follows after the conclusion below.)

And in conclusion ...
Computer Architecture >> instruction sets. Computer architecture skill sets are different: a quantitative approach to design, solid interfaces that really work, technology tracking and anticipation. CS 252 is for learning new skills and making the transition to research.
Computer science is at a crossroads from sequential to parallel computing, and salvation requires innovation in many fields, including computer architecture.
Read Appendix A, B, C of your book. Next time: a quick summary of everything you need to know to take this class.
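
As a quick numerical check of the Amdahl's Law example above, here is a small C sketch (mine, not from the slides) that evaluates the processor performance equation and the overall-speedup formula; the instruction count, CPI, and clock rate passed in main() are made-up inputs for illustration.

```c
#include <stdio.h>

/* CPU time = (Instructions/Program) x (Cycles/Instruction) x (Seconds/Cycle) */
static double cpu_time(double inst_count, double cpi, double clock_hz) {
    return inst_count * cpi / clock_hz;
}

/* Amdahl's Law: overall speedup from enhancing a fraction of execution time. */
static double amdahl(double fraction_enhanced, double speedup_enhanced) {
    return 1.0 / ((1.0 - fraction_enhanced) + fraction_enhanced / speedup_enhanced);
}

int main(void) {
    /* Made-up program: 1e9 instructions, CPI of 1.5, 2 GHz clock. */
    printf("CPU time        = %.3f s\n", cpu_time(1e9, 1.5, 2e9));

    /* Slide example: CPU made 10X faster, but 60% of time is I/O, so only
     * 40% of execution is enhanced: 1/((1-0.4)+0.4/10) = 1.56. */
    printf("overall speedup = %.2f\n", amdahl(0.4, 10.0));

    /* Best case if the enhanced 40% became infinitely fast. */
    printf("max speedup     = %.2f\n", 1.0 / (1.0 - 0.4));
    return 0;
}
```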