The Alpha 21264 Microprocessor: Out-of-Order Execution at 600 MHz


R. E. Kessler, Compaq Computer Corporation, Shrewsbury, MA

Some Highlights
- Continued Alpha performance leadership
- 600 MHz operation in 0.35 µm CMOS6, 6 metal layers, 2.2 V
- 15 million transistors, 3.1 cm² die, 587-pin PGA
- SPECint95 of 30+ and SPECfp95 of 50+
- Out-of-order and speculative execution
- 4-way integer issue, 2-way floating-point issue
- Sophisticated tournament branch prediction
- High-bandwidth memory system (1+ GB/sec)

Alpha 21264: Block Diagram
Pipeline stages 0-6: FETCH, MAP, QUEUE, REG, EXEC, DCACHE.
- Fetch: branch predictors and next-line prediction feed the L1 instruction cache at 4 instructions/cycle
- Map/Queue: integer issue queue (20 entries) and floating-point issue queue (15 entries); 80 in-flight instructions plus 32 loads and 32 stores
- Exec: integer ADD/MUL units; floating-point ADD, Div/Sqrt, and MUL; 80 integer and 72 floating-point physical registers
- Dcache: L1 data cache with victim buffer and miss handling
- Bus interface: 64-bit system bus, 128-bit cache bus, 44-bit physical addresses

21264 Instruction Fetch Bandwidth Enablers
- The 64 KB two-way set-associative instruction cache supplies four instructions every cycle
- The next-fetch and set predictors provide the fast cache access time of a direct-mapped cache and eliminate bubbles in nonsequential control flow
- The instruction fetcher speculates through up to 20 branch predictions to supply a continuous stream of instructions
- The tournament branch predictor dynamically selects between local and global history to minimize mispredicts

Instruction Stream Improvements
- Each instruction-cache line stores a next-fetch address and a set prediction alongside the instructions; the predicted line and set are fetched immediately and verified against the tag compares (hit / miss / set mispredict)
- Learns dynamic jumps; 0-cycle penalty on predicted branches; set-associative behavior at direct-mapped speed
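The next-fetch/set predictor above can be sketched as a small lookup trained on mispredicts. This is a hypothetical simplification (the real predictions are stored per cache line, not in a side table), and the `NextFetchPredictor` name and fall-through default are illustrative assumptions:

```python
# Sketch of a next-fetch / set predictor: each fetch block remembers a guess of
# the next fetch address and the cache way it will hit in, so the next access
# can begin before the branch is decoded, let alone resolved.

class NextFetchPredictor:
    def __init__(self):
        self.table = {}  # fetch address -> (predicted next address, predicted way)

    def predict(self, addr):
        # Untrained default: fall through to the next 4-instruction (16-byte) block, way 0.
        return self.table.get(addr, (addr + 16, 0))

    def train(self, addr, actual_next, actual_way):
        # On a mispredict, remember where this block actually went.
        self.table[addr] = (actual_next, actual_way)

p = NextFetchPredictor()
assert p.predict(0x1000) == (0x1010, 0)   # untrained: fall-through guess
p.train(0x1000, 0x2000, 1)                # block at 0x1000 jumped to 0x2000, way 1
assert p.predict(0x1000) == (0x2000, 1)   # learned dynamic jump target
```

A correct prediction costs zero bubbles; a wrong one is caught by the tag compare and retrained.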

Fetch Stage: Branch Prediction
- Some branch directions can be predicted from their own past behavior (local correlation): e.g. if (a%2 == 0) produces the pattern TNTN...
- Others can be predicted from how the program arrived at the branch (global correlation): e.g. if (a == 0) is taken and if (b == 0) is taken, predict if (a == b) taken

Tournament Branch Prediction
- Local predictor: the program counter indexes a local-history table (1024 x 10 bits); each history indexes 3-bit local prediction counters (1024 x 3)
- Global predictor: the global history indexes 2-bit global prediction counters (4096 x 2)
- Choice predictor: the global history also indexes 2-bit choice counters (4096 x 2) that select between the local and global predictions
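The tournament predictor above can be sketched with the stated table sizes (1024 x 10 local history, 1024 3-bit local counters, 4096 2-bit global and choice counters). The class below is a simplified software illustration, not the 21264's exact hardware:

```python
# Tournament predictor sketch: a per-branch local predictor, a global-history
# predictor, and a chooser that learns which of the two to trust per history.

class TournamentPredictor:
    def __init__(self):
        self.local_hist = [0] * 1024      # 10 bits of per-branch history
        self.local_pred = [3] * 1024      # 3-bit saturating counters, indexed by history
        self.global_pred = [1] * 4096     # 2-bit counters, indexed by global history
        self.choice = [1] * 4096          # 2-bit chooser counters
        self.ghist = 0                    # 12 bits of global outcome history

    def predict(self, pc):
        lh = self.local_hist[pc % 1024]
        local = self.local_pred[lh] >= 4
        glob = self.global_pred[self.ghist] >= 2
        use_global = self.choice[self.ghist] >= 2
        return glob if use_global else local

    def update(self, pc, taken):
        idx = pc % 1024
        lh = self.local_hist[idx]
        local = self.local_pred[lh] >= 4
        glob = self.global_pred[self.ghist] >= 2
        # Train the chooser toward whichever component predictor was right.
        if local != glob:
            step = 1 if glob == bool(taken) else -1
            self.choice[self.ghist] = min(3, max(0, self.choice[self.ghist] + step))
        # Update both component counters and both histories.
        step = 1 if taken else -1
        self.local_pred[lh] = min(7, max(0, self.local_pred[lh] + step))
        self.global_pred[self.ghist] = min(3, max(0, self.global_pred[self.ghist] + step))
        bit = 1 if taken else 0
        self.local_hist[idx] = ((lh << 1) | bit) % 1024
        self.ghist = ((self.ghist << 1) | bit) % 4096

bp = TournamentPredictor()
for i in range(160):                       # warm up on an alternating (TNTN) branch
    bp.update(0x40, i % 2)
assert bp.predict(0x40) == bool(160 % 2)   # next outcome correctly predicted not-taken
```

After warm-up, both the local history (which captures the TNTN pattern) and the global history predict the alternating branch perfectly, so the chooser's pick is correct either way.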

Map and Queue Stages
- Map (rename): renames 4 instructions per cycle (8 source / 4 destination registers); 80 integer + 72 floating-point physical registers
- Queue: the integer queue holds 20 entries (quad-issue); the floating-point queue holds 15 entries (dual-issue)
- Instructions issue out-of-order when their data are ready
- Issue is prioritized from oldest to youngest each cycle
- Instructions leave the queues after they issue; the queue collapses every cycle as needed
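The oldest-first, data-ready issue policy above can be sketched as follows. The list-based queue and register-set readiness check are illustrative assumptions, not the 21264's arbiter logic:

```python
# Issue-queue sketch: each cycle, pick up to 4 ready instructions, oldest first;
# issued entries leave the queue, and the remaining entries collapse together.

def issue_cycle(queue, ready_regs, width=4):
    """queue: oldest-first list of (name, source_registers). Returns issued names."""
    issued = []
    for entry in list(queue):             # scan oldest to youngest
        if len(issued) == width:
            break
        name, srcs = entry
        if all(r in ready_regs for r in srcs):
            issued.append(name)
            queue.remove(entry)           # entry leaves; the list collapses
    return issued

q = [("add1", {"r1"}), ("mul1", {"r9"}), ("add2", {"r1", "r2"})]
assert issue_cycle(q, ready_regs={"r1", "r2"}) == ["add1", "add2"]
assert q == [("mul1", {"r9"})]            # unready instruction stays, queue collapsed
```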

Map and Queue Stages (datapath)
- Mapper: architectural register numbers enter the map CAMs (80 entries, 4 instructions / cycle) and emerge as renamed physical registers; saved map state supports misspeculation recovery
- A register scoreboard tracks which physical registers are ready
- Issue queue (20 entries): queued instructions raise requests to an arbiter, which grants execution slots each cycle
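Register renaming as described (sources read the current map; each destination gets a fresh physical register) can be sketched as a map plus a free list. The dict-based `RenameMap` below is a software simplification of the 21264's CAM-based mapper:

```python
# Rename sketch: 32 architectural integer registers mapped onto 80 physical
# registers (the 21264's integer count); the spare physicals form a free list.

class RenameMap:
    def __init__(self, arch_regs=32, phys_regs=80):
        self.map = {f"r{i}": f"p{i}" for i in range(arch_regs)}
        self.free = [f"p{i}" for i in range(arch_regs, phys_regs)]

    def rename(self, dest, srcs):
        # Sources read the current mapping first; the destination then gets a
        # fresh physical register, so older in-flight readers keep their value.
        phys_srcs = [self.map[s] for s in srcs]
        new_phys = self.free.pop(0)
        self.map[dest] = new_phys
        return new_phys, phys_srcs

rm = RenameMap()
d1, s1 = rm.rename("r1", ["r2", "r3"])
d2, s2 = rm.rename("r1", ["r1", "r4"])   # WAW on r1: a new physical register
assert s1 == ["p2", "p3"]
assert (d1, d2) == ("p32", "p33")
assert s2 == ["p32", "p4"]               # reads the value produced by d1
```

Renaming is what removes the false (WAW/WAR) dependences so that only true data dependences constrain out-of-order issue.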

Register and Execute Stages
- Floating-point execution units: one pipe with Add / Div / SQRT, one pipe with Mul
- Integer execution units: two clusters (Cluster 0 and Cluster 1), each with an upper pipe (shift/branch and add/logic, plus the motion-video units on one cluster and the multiplier on the other) and a lower pipe (add / logic / memory)

Integer Cross-Cluster Instruction Scheduling and Execution
- Instructions are statically pre-slotted to the upper or lower execution pipes
- The issue queue dynamically selects between the left and right clusters
- This achieves most of the performance of 4-way issue with the simplicity of 2-way issue
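The static slot / dynamic cluster split above can be sketched as two tiny functions. The type-to-pipe table is an illustrative assumption (in the real machine add/logic instructions can go to either pipe), not the 21264's slotting rules:

```python
# Slotting sketch: instruction type statically fixes the pipe (upper or lower);
# at issue time the queue dynamically picks whichever cluster's pipe is free.

UPPER = {"shift", "branch", "mul", "mvi"}   # assumed slotting for illustration
# add/logic and load/store default to the lower pipe in this simplification

def slot(op):
    return "upper" if op in UPPER else "lower"

def pick_cluster(pipe, busy):
    # busy: set of (cluster, pipe) slots already granted this cycle
    for cluster in (0, 1):
        if (cluster, pipe) not in busy:
            busy.add((cluster, pipe))
            return cluster
    return None   # both clusters' pipes taken; the instruction waits a cycle

busy = set()
assert slot("shift") == "upper"
assert pick_cluster("upper", busy) == 0
assert pick_cluster("upper", busy) == 1
assert pick_cluster("upper", busy) is None   # 2 choices per pipe, 4-way total
```

The arbiter only ever chooses between two clusters per pipe, which is the "simplicity of 2-way issue" the slide refers to.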

21264 On-Chip Memory System Features
- Two loads/stores per cycle, in any combination
- 64 KB two-way set-associative L1 data cache (9.6 GB/sec), phase-pipelined at over 1 GHz (no bank conflicts!)
- 3-cycle load latency (issue to issue of the consumer) with hit prediction
- Out-of-order and speculative execution minimize effective memory latency
- 32 outstanding loads and 32 outstanding stores maximize memory-system parallelism
- Speculative stores forward data to subsequent loads
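As a worked example of the L1 data cache geometry above (64 KB, two-way set-associative), assuming 64-byte blocks (an assumption; the deck does not state the block size):

```python
# 64 KB / 64-byte blocks / 2 ways = 512 sets, so an address splits into a
# 6-bit block offset, a 9-bit set index, and the remaining bits of tag.

BLOCK = 64                            # bytes per block (assumed)
WAYS = 2
SETS = 64 * 1024 // BLOCK // WAYS     # = 512

def split_address(addr):
    offset = addr % BLOCK
    index = (addr // BLOCK) % SETS
    tag = addr // (BLOCK * SETS)
    return tag, index, offset

assert SETS == 512
assert split_address(0x12345) == (2, 141, 5)   # tag, set index, byte offset
```

With two ways per set, both tags in the indexed set are compared in parallel against the address tag to decide hit or miss.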

Memory System (Continued)
- The L2 and system interfaces run at a 1.5x clock ratio; fill data, victim data, and the two clusters' memory data busses (128- and 64-bit paths) share a double-pumped GHz data path in the bus interface
- New references check their addresses against existing references
- Speculative store data bypasses to loads whose addresses match
- Multiple misses to the same cache block merge
- Hazard detection logic manages out-of-order references

Low-Latency Speculative Issue of Integer Load Data Consumers (Predict Hit)
  LDQ  R0, 0(R0)
  ADDQ R0, R0, R1
  OP   R2, R2, R3
When predicting a load hit:
- The ADDQ issues (speculatively) 3 cycles after the LDQ, before the load's hit/miss outcome is known
- Best performance if the load actually hits (matching the prediction)
If the LDQ misses when predicted to hit:
- Squash two cycles (replay the ADDQ and its consumers)
- Force a mini-replay directly from the issue queue

Low-Latency Speculative Issue of Integer Load Data Consumers (Predict Miss)
  LDQ  R0, 0(R0)
  OP1  R2, R2, R3
  OP2  R4, R4, R5
  ADDQ R0, R0, R1
When predicting a load miss:
- The minimum load latency is 5 cycles (more on an actual miss), but there are no squashes
- Best performance if the load actually misses (as predicted)
- The hit/miss predictor is the MSB of a 4-bit counter (hits increment by 1, misses decrement by 2)

Dynamic Hazard Avoidance (Before Marking)
Program order (assume R28 == R29):
  LDQ R0, 64(R28)
  STQ R0, 0(R28)     ; store followed by a load to the
  LDQ R1, 0(R29)     ; same memory address!
First execution order:
  LDQ R0, 64(R28)
  LDQ R1, 0(R29)     ; this (re-ordered) load got the wrong data value!
  STQ R0, 0(R28)
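The load hit/miss predictor described above (MSB of a 4-bit saturating counter; hits increment by 1, misses decrement by 2) can be sketched directly:

```python
# Hit/miss predictor sketch: the asymmetric update (+1 on hit, -2 on miss)
# makes the prediction flip to "miss" quickly when misses start occurring.

class HitMissPredictor:
    def __init__(self):
        self.counter = 15          # 4-bit counter, start saturated at "hit"

    def predict_hit(self):
        return self.counter >= 8   # MSB of the 4-bit counter

    def update(self, hit):
        if hit:
            self.counter = min(15, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 2)

p = HitMissPredictor()
assert p.predict_hit()
for _ in range(4):
    p.update(hit=False)            # 15 -> 13 -> 11 -> 9 -> 7
assert not p.predict_hit()         # four misses flip the MSB to "predict miss"
```

Predicting hit buys the 3-cycle consumer latency at the risk of a 2-cycle squash; predicting miss avoids squashes at a guaranteed 5-cycle minimum, so the bias toward "miss" after repeated misses is the safer default.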

Dynamic Hazard Avoidance (After Marking/Training)
Program order (assume R28 == R29):
  LDQ R0, 64(R28)
  STQ R0, 0(R28)     ; store followed by a load to the
  LDQ R1, 0(R29)     ; same memory address! (this load is marked this time)
Subsequent executions (after marking) keep the load behind the store:
  LDQ R0, 64(R28)
  STQ R0, 0(R28)
  LDQ R1, 0(R29)     ; the marked (delayed) load gets the bypassed store data!

New Memory Prefetches
- Software-directed prefetches: prefetch; prefetch with modify intent; prefetch, evict next
- Evict block (eject data from the cache)
- Write hint: allocate a block with no data read; useful for full-cache-block writes
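The marking/training behavior in the two hazard-avoidance slides above can be sketched as a table of load PCs that have previously mis-speculated past a store. `StoreWaitTable` and its interface are illustrative assumptions, not the 21264's exact mechanism:

```python
# Memory-dependence training sketch: a load that once ran ahead of an older
# store to the same address is marked by PC; afterwards it is held back until
# prior stores have executed, so it picks up the bypassed store data.

class StoreWaitTable:
    def __init__(self):
        self.marked = set()        # PCs of loads that previously got stale data

    def may_issue_early(self, load_pc, older_stores_pending):
        if load_pc in self.marked and older_stores_pending:
            return False           # delay the load behind the older stores
        return True

    def train(self, load_pc):
        self.marked.add(load_pc)   # hazard detected once: mark this load

t = StoreWaitTable()
assert t.may_issue_early(0x100, older_stores_pending=True)    # first time: reordered
t.train(0x100)                                                # wrong data detected; mark
assert not t.may_issue_early(0x100, older_stores_pending=True)
assert t.may_issue_early(0x100, older_stores_pending=False)   # no older stores: free to go
```

Loads that never conflict keep the full benefit of out-of-order issue; only the trained ones pay the delay.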

21264 Off-Chip Memory System Features
- 8 outstanding block fills + 8 victims
- Split L2 cache and system busses (back-side cache)
- High-speed point-to-point channels: clock-forwarding technology, low pin counts
- L2 hit load latency (load issue to consumer issue) = 6 cycles + SRAM latency
- Max L2 cache bandwidth of 16 bytes per 1.5 cycles: 6.4 GB/sec with a 400 MHz transfer rate
- L2 miss load latency can be 160 ns (60 ns DRAM)
- Max system bandwidth of 8 bytes per 1.5 cycles: 3.2 GB/sec with a 400 MHz transfer rate

21264 Pin Bus
- Two independent ports: a 128-bit L2 port (tag and data RAMs) and a 64-bit system data port to the chip-set (DRAM and I/O); independent data busses allow simultaneous transfers
- L2: 0-16 MB, synchronous, direct-mapped
- Peak L2 bandwidth examples: 133 MHz burst RAM, 2.1 GB/sec; 200+ MHz late-write RAM, 3.2+ GB/sec; dual-data 400+ MHz, 6.4+ GB/sec
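The peak-bandwidth figures above follow directly from the bus widths and the transfer rate (16 bytes per 1.5 CPU cycles at 600 MHz is a 400 MHz transfer rate):

```python
# Bandwidth check: bytes per transfer times transfers per second.

l2_bw = 16 * 400e6 / 1e9       # 128-bit L2 bus at 400 MHz, in GB/s
sys_bw = 8 * 400e6 / 1e9       # 64-bit system bus at 400 MHz, in GB/s

assert l2_bw == 6.4            # matches the 6.4 GB/sec L2 figure
assert sys_bw == 3.2           # matches the 3.2 GB/sec system figure
```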

Compaq Alpha / AMD Shared System Pin Bus (External Interface)
- The 21264 and the AMD K7 share the same high-performance system pin bus ("Slot A"), with a 64-bit system data path to the system chip-set (DRAM and I/O)
- Shared system chip-set designs: this is a win-win!

Dual 21264 System Example
- Two 21264s, each with its L2 cache, connect through a control chip and 8 data switches (256-bit paths) to 100 MHz SDRAM memory banks
- PCI chips provide two 64-bit PCI busses

Measured Performance: SPEC95

SPECint95:
  21264-600          30
  21164-600          18.8
  PA8200-240         17.4
  Pentium II-400     16.5
  UltraSparc II-360  16.1
  R10000-250         14.7
  PowerPC 604e-332   14.4

SPECfp95:
  21264-600          50
  21164-600          29.2
  PA8200-240         28.5
  POWER2-160         26.6
  R10000-250         24.5
  UltraSparc II-360  23.5
  Pentium II-400     13.7
  PowerPC 604e-332   12.6

Measured Performance: STREAMS (max MB/sec)
  21264-600               1000
  RS6000-397              883
  UltraSparc II (UE6000)  367
  R10000-250 (O2000)      358
  21164-533 (164SX)       292
  Pentium II-300 (Dell)   213

Alpha 21264 Floorplan
(Die photo: floating-point execution units, floating-point mapper and queue, instruction fetch, instruction and data caches, bus interface unit, memory controllers, integer execution units (left and right), integer mapper, integer queue, data and control busses.)

Summary
- The 21264 will maintain Alpha's performance lead: 30+ SPECint95, 50+ SPECfp95, 1+ GB/sec memory bandwidth
- The 21264 proves that high frequency and sophisticated architectural features can coexist:
  - high-bandwidth speculative instruction fetch
  - out-of-order execution
  - 6-way instruction issue
  - highly parallel out-of-order memory system