
COMPUTER ORGANIZATION AND DESIGN: The Hardware/Software Interface, 5th Edition. Chapter 4: The Processor

The Processor
- Introduction
- Logic Design Conventions
- Building a Datapath
- A Simple Implementation Scheme
- An Overview of Pipelining
- Pipeline Summary

Introduction
- CPU performance factors: instruction count (determined by ISA and compiler); CPI and cycle time (determined by CPU hardware)
- We will examine two MIPS implementations: a simplified version and a more realistic pipelined version
- A simple subset shows most aspects: memory reference (lw, sw), arithmetic/logical (add, sub, and, or, slt), control transfer (beq, j)

Instruction Execution
- PC → instruction memory: fetch the instruction
- Register numbers → register file: read the registers
- Depending on the instruction class, use the ALU to calculate: the arithmetic result (addition/subtraction), the memory address (load/store), or the branch target address
- Access data memory for load/store
- PC ← target address or PC + 4
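A minimal Python sketch of this flow, under stated assumptions: the instruction encoding (a dict with "op", register, and "imm" fields) and the machine-state dict are illustrative, not the textbook's representation.

# Hypothetical single-cycle step for the MIPS subset (add, sub, lw, sw, beq).
def step(state):
    instr = state["imem"][state["pc"]]            # PC -> instruction memory, fetch
    rs = state["regs"][instr["rs"]]               # register numbers -> register file, read
    rt = state["regs"][instr["rt"]]
    state["pc"] += 4                              # default next PC
    if instr["op"] in ("add", "sub"):             # ALU computes the arithmetic result
        state["regs"][instr["rd"]] = rs + rt if instr["op"] == "add" else rs - rt
    elif instr["op"] == "lw":                     # ALU computes the memory address
        state["regs"][instr["rt"]] = state["dmem"][rs + instr["imm"]]
    elif instr["op"] == "sw":
        state["dmem"][rs + instr["imm"]] = rt
    elif instr["op"] == "beq" and rs == rt:       # PC <- target address or PC + 4
        state["pc"] += instr["imm"] << 2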

CPU Overview: an abstract view of the implementation of the MIPS subset (figure).

Multiplexers: we can't just join wires together; use multiplexers to select among alternative sources.

Control: the basic implementation of the MIPS subset, including the necessary multiplexors and control lines (figure).

Logic Design Basics
- Information is encoded in binary: low voltage = 0, high voltage = 1; one wire per bit; multi-bit data is encoded on multi-wire buses
- Combinational elements operate on data: the output is a function of the inputs
- State (sequential) elements store information

Combinational Elements
- AND gate: Y = A & B
- Adder: Y = A + B
- Multiplexer: Y = S ? I1 : I0
- Arithmetic/Logic Unit: Y = F(A, B)
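As a rough illustration (not from the slides), these elements can be modelled as pure Python functions whose outputs depend only on their current inputs; the F encodings in the ALU sketch follow the ALU-control table that appears later in the deck.

def and_gate(a, b):
    return a & b                      # Y = A & B

def adder(a, b):
    return (a + b) & 0xFFFFFFFF       # Y = A + B, truncated to 32 bits

def mux(s, i0, i1):
    return i1 if s else i0            # Y = S ? I1 : I0

def alu(f, a, b):
    if f == 0b0000: return a & b                    # AND
    if f == 0b0001: return a | b                    # OR
    if f == 0b0010: return (a + b) & 0xFFFFFFFF     # add
    if f == 0b0110: return (a - b) & 0xFFFFFFFF     # subtract
    if f == 0b0111: return int(a < b)               # set-on-less-than (unsigned here, for simplicity)
    if f == 0b1100: return ~(a | b) & 0xFFFFFFFF    # NOR
    raise ValueError("unknown ALU control value")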

Multiplexors: a two-input multiplexor has two data inputs (A and B) labeled 0 and 1, one selector input (S), and an output C.

Multiplexors: a 1-bit multiplexor is arrayed 32 times to select between two 32-bit inputs; one data-selection signal drives all 32 1-bit multiplexors.

Sequential Elements
- Register: stores data in a circuit
- Uses a clock signal to determine when to update the stored value
- Edge-triggered: updates when Clk changes from 0 to 1

Sequential Elements
- Register with write control: only updates on the clock edge when the write control input is 1
- Used when the stored value is required later
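A behavioural sketch of such a register in Python (the class and method names are mine): the stored value changes only on a rising clock edge, and only when the write input is 1.

class RegisterWithWrite:
    def __init__(self, value=0):
        self.q = value            # stored value, visible at output Q
        self._prev_clk = 0

    def tick(self, clk, d, write):
        # Edge-triggered: capture D only on a 0 -> 1 transition of Clk, and only if Write = 1.
        if self._prev_clk == 0 and clk == 1 and write == 1:
            self.q = d
        self._prev_clk = clk
        return self.q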

Building a Datapath
- Datapath: the elements that process data and addresses in the CPU (registers, ALUs, muxes, memories, ...)
- We will build a MIPS datapath incrementally, refining the overview design

Instruction Fetch
- The PC is a 32-bit register; an adder increments it by 4 to point to the next instruction
- A portion of the datapath is used for fetching instructions and incrementing the program counter; the fetched instruction is used by other parts of the datapath (figure)

R-Format Instructions
- Read two register operands
- Perform an arithmetic/logical operation
- Write the register result

Load/Store Instructions
- Read register operands
- Calculate the address with the ALU: 32-bit base register plus sign-extended 16-bit offset (sketched below)
- Load: read memory and update a register
- Store: write a register value to memory
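A small sketch of that address calculation (the helper names are illustrative): sign-extend the 16-bit offset, then add it to the 32-bit base register value.

def sign_extend16(imm16):
    imm16 &= 0xFFFF
    return imm16 - 0x10000 if (imm16 & 0x8000) else imm16

def effective_address(base, offset16):
    return (base + sign_extend16(offset16)) & 0xFFFFFFFF

# e.g. lw $t1, -8($t2) with $t2 = 0x1000 gives address 0x0FF8
assert effective_address(0x1000, 0xFFF8) == 0x0FF8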

Branch Instructions
- Read register operands
- Compare operands: use the ALU to subtract and check the Zero output
- Calculate the target address: sign-extend the displacement, shift left 2 places (x4), and add to PC + 4, which is already calculated by instruction fetch (see the sketch below)
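The same target calculation in Python, as an illustration rather than the book's code:

def branch_target(pc, offset16):
    off = offset16 & 0xFFFF
    off = off - 0x10000 if (off & 0x8000) else off     # sign-extend the 16-bit displacement
    return (pc + 4 + (off << 2)) & 0xFFFFFFFF          # shift left 2 (x4), add to PC + 4

# e.g. a beq at PC = 0x00400000 with offset 5 branches to 0x00400004 + 20 = 0x00400018
assert branch_target(0x00400000, 5) == 0x00400018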

Branch Instructions (datapath): the shift-left-2 unit just re-routes wires (x4), and sign extension replicates the sign-bit wire (figure).

Composing the Elements
- The simple datapath does one instruction in one clock cycle
- Each datapath element can only do one function at a time; hence, we need separate instruction and data memories
- Use multiplexers where alternate data sources are used for different instructions

R-Type/Load/Store Datapath (figure).

Full Datapath (figure).

ALU Control
- The ALU is used for: Load/Store: F = add; Branch: F = subtract; R-type: F depends on the funct field

  ALU control  Function
  0000         AND
  0001         OR
  0010         add
  0110         subtract
  0111         set-on-less-than
  1100         NOR

ALU Control
- Assume a 2-bit ALUOp derived from the opcode; combinational logic derives the ALU control

  opcode  ALUOp  Operation         funct    ALU function      ALU control
  lw      00     load word         XXXXXX   add               0010
  sw      00     store word        XXXXXX   add               0010
  beq     01     branch equal      XXXXXX   subtract          0110
  R-type  10     add               100000   add               0010
                 subtract          100010   subtract          0110
                 AND               100100   AND               0000
                 OR                100101   OR                0001
                 set-on-less-than  101010   set-on-less-than  0111
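The table can be read as a small two-level decoder; a sketch in Python (the function name is mine), with ALUOp coming from the main control and funct from instruction bits 5:0:

def alu_control(alu_op, funct):
    if alu_op == 0b00:              # lw / sw
        return 0b0010               # add
    if alu_op == 0b01:              # beq
        return 0b0110               # subtract
    # alu_op == 0b10: R-type, the ALU function comes from the funct field
    return {
        0b100000: 0b0010,           # add
        0b100010: 0b0110,           # subtract
        0b100100: 0b0000,           # AND
        0b100101: 0b0001,           # OR
        0b101010: 0b0111,           # set-on-less-than
    }[funct]

assert alu_control(0b10, 0b101010) == 0b0111            # slt -> set-on-less-than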

The Main Control Unit
- Control signals are derived from the instruction fields

  R-type:     opcode = 0 [31:26], rs [25:21], rt [20:16], rd [15:11], shamt [10:6], funct [5:0]
  Load/Store: opcode = 35 or 43 [31:26], rs [25:21], rt [20:16], address [15:0]
  Branch:     opcode = 4 [31:26], rs [25:21], rt [20:16], address [15:0]

- The opcode is always in bits 31:26; rs is always read; rt is read except for load; the destination register field is written for R-type and load; the 16-bit address field is sign-extended and added

The Main Control Unit: the datapath with MUXes and control lines identified (figure).

The Main Control Unit (figure).

Datapath With Control: the simple datapath with the control unit (figure).

Datapath With Control
- The setting of the control lines is completely determined by the opcode field of the instruction
- The first row of the control table corresponds to the R-format instructions (add, sub, AND, OR, and slt). For all of these, the source register fields are rs and rt and the destination register field is rd; this defines how the signals ALUSrc and RegDst are set
- An R-type instruction writes a register (RegWrite = 1) but neither reads nor writes data memory
- When the Branch control signal is 0, the PC is unconditionally replaced with PC + 4; otherwise, the PC is replaced by the branch target if the Zero output of the ALU is also high
- The ALUOp field for R-type instructions is set to 10 to indicate that the ALU control should be generated from the funct field
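Read as a truth table, the main control is a lookup keyed by the opcode; a sketch following the textbook's control table for this subset, with don't-care outputs written as None:

# (RegDst, ALUSrc, MemtoReg, RegWrite, MemRead, MemWrite, Branch, ALUOp)
MAIN_CONTROL = {
    0:  (1,    0, 0,    1, 0, 0, 0, 0b10),   # R-format
    35: (0,    1, 1,    1, 1, 0, 0, 0b00),   # lw
    43: (None, 1, None, 0, 0, 1, 0, 0b00),   # sw (RegDst and MemtoReg are don't-cares)
    4:  (None, 0, None, 0, 0, 0, 1, 0b01),   # beq
}

def main_control(opcode):
    return MAIN_CONTROL[opcode]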

R-Type Instruction (figure).

R-Type Instruction
For example, add $t1, $t2, $t3. Four steps execute the instruction in one clock cycle:
1. The instruction is fetched, and the PC is incremented
2. Registers $t2 and $t3 are read from the register file; the main control unit also computes the setting of the control lines during this step
3. The ALU operates on the data read from the register file, using the function code (bits 5:0, the funct field) to generate the ALU function
4. The result from the ALU is written into the register file, using bits 15:11 of the instruction to select the destination register ($t1)

Load Instruction (figure).

Load Instruction (2)
For example, lw $t1, offset($t2). Five steps execute the instruction in one clock cycle:
1. The instruction is fetched from the instruction memory and the PC is incremented
2. The register ($t2) value is read from the register file
3. The ALU computes the sum of the value read from the register file and the sign-extended offset
4. The sum from the ALU is used as the address for the data memory
5. The data from the memory unit is written into the register file; the destination register is given by bits 20:16 of the instruction ($t1)

Branch-on-Equal Instruction (figure).

Branch-on-Equal Instruction (2)
For example, beq $t1, $t2, offset. Four steps execute the instruction in one clock cycle:
1. The instruction is fetched from instruction memory and the PC is incremented
2. Registers $t1 and $t2 are read from the register file
3. The ALU performs a subtract on the data values read from the register file; PC + 4 is added to the sign-extended offset shifted left by two, and the result is the branch target address
4. The Zero result from the ALU is used to decide which adder result to store into the PC

Implementing Jumps
- Format: opcode = 2 [31:26], address [25:0]
- Jump uses a word address
- Update the PC with the concatenation of: the top 4 bits of the old PC, the 26-bit jump address, and 00 at the right (left shift by 2 bits = x4); see the sketch below
- Needs an extra control signal decoded from the opcode
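A sketch of that PC update in Python; here the upper 4 bits are taken from the incremented PC, which matches the textbook figure and gives the same result as the old PC within a 256 MB region.

def jump_target(pc, addr26):
    # Concatenate: top 4 bits of PC + 4, the 26-bit word address, and 00 on the right.
    return ((pc + 4) & 0xF0000000) | ((addr26 & 0x03FFFFFF) << 2)

# e.g. a j with address field 0x00100006 executed at PC = 0x00400000 jumps to 0x00400018
assert jump_target(0x00400000, 0x00100006) == 0x00400018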

Datapath With Jumps Added (figure).

Performance Issues
- The longest delay determines the clock period
- Critical path: the load instruction (instruction memory → register file → ALU → data memory → register file)
- It is not feasible to vary the period for different instructions
- This violates the design principle of making the common case fast
- We will improve performance by pipelining

Pipelining Analogy
- Pipelined laundry: overlapping execution; parallelism improves performance
- Four loads: speedup = 8/3.5 ≈ 2.3
- Non-stop: speedup = 2n / [0.5(n-1) + 2], which approaches 4 = the number of stages for large n

MIPS Pipeline
Five stages, one step per stage:
1. IF: instruction fetch from memory
2. ID: instruction decode and register read
3. EX: execute operation or calculate address
4. MEM: access memory operand
5. WB: write result back to register

Pipeline Performance
- Assume the time for stages is 100 ps for register read or write and 200 ps for the other stages
- Compare the pipelined datapath with the single-cycle datapath

  Instr     Instr fetch  Register read  ALU op  Memory access  Register write  Total time
  lw        200 ps       100 ps         200 ps  200 ps         100 ps          800 ps
  sw        200 ps       100 ps         200 ps  200 ps                         700 ps
  R-format  200 ps       100 ps         200 ps                 100 ps          600 ps
  beq       200 ps       100 ps         200 ps                                 500 ps

Pipeline Performance: single-cycle (Tc = 800 ps) versus pipelined (Tc = 200 ps) (figure).

Pipeline Speedup
- If all stages are balanced (i.e., all take the same time): time between instructions (pipelined) = time between instructions (nonpipelined) / number of stages, here 800 ps / 5 = 160 ps
- If not balanced, the speedup is less: here the slowest stage sets the clock, so the speedup is 800 ps / 200 ps = 4
- The speedup comes from increased throughput; the latency (time for each instruction) does not decrease
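A quick numeric check of these figures (my own arithmetic, using the 800 ps single-cycle clock and the 200 ps slowest pipeline stage from the table above):

n = 1_000_000                      # instructions in a long run
single_cycle = n * 800             # ps: every instruction takes the full 800 ps clock
pipelined = (n + 4) * 200          # ps: 200 ps per cycle, plus 4 cycles to fill the 5-stage pipeline
print(single_cycle / pipelined)    # ~3.99998, approaching 800/200 = 4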

Pipelining and ISA Design
The MIPS ISA is designed for pipelining:
- All instructions are 32 bits: easier to fetch and decode in one cycle (cf. x86: 1- to 15-byte instructions)
- Few and regular instruction formats: can decode and read registers in one step
- Load/store addressing: can calculate the address in the 3rd stage and access memory in the 4th stage
- Alignment of memory operands: a memory access takes only one cycle

Hazards
Situations that prevent starting the next instruction in the next cycle:
- Structure hazards: a required resource is busy
- Data hazards: need to wait for a previous instruction to complete its data read/write
- Control hazards: deciding on a control action depends on a previous instruction

Structure Hazards
- A conflict for use of a resource
- In a MIPS pipeline with a single memory: a load/store requires data access, so an instruction fetch would have to stall for that cycle, causing a pipeline bubble
- Hence, pipelined datapaths require separate instruction/data memories (or separate instruction/data caches)

Data Hazards
An instruction depends on completion of a data access by a previous instruction:
  add $s0, $t0, $t1
  sub $t2, $s0, $t3

Forwarding (aka Bypassing)
- Use the result as soon as it is computed; don't wait for it to be stored in a register
- Requires extra connections in the datapath

Load-Use Data Hazard
- Can't always avoid stalls by forwarding: if the value is not yet computed when it is needed, we can't forward backward in time!

Code Scheduling to Avoid Stalls
Reorder code to avoid use of a load result in the next instruction. C code: A = B + E; C = B + F;

Original order (13 cycles):
  lw  $t1, 0($t0)
  lw  $t2, 4($t0)
  (stall)
  add $t3, $t1, $t2
  sw  $t3, 12($t0)
  lw  $t4, 8($t0)
  (stall)
  add $t5, $t1, $t4
  sw  $t5, 16($t0)

Reordered (11 cycles):
  lw  $t1, 0($t0)
  lw  $t2, 4($t0)
  lw  $t4, 8($t0)
  add $t3, $t1, $t2
  sw  $t3, 12($t0)
  add $t5, $t1, $t4
  sw  $t5, 16($t0)

Control Hazards
- A branch determines the flow of control: fetching the next instruction depends on the branch outcome
- The pipeline can't always fetch the correct instruction; it is still working on the ID stage of the branch
- In the MIPS pipeline: we need to compare registers and compute the target early in the pipeline, so add hardware to do it in the ID stage

Stall on Branch: wait until the branch outcome is determined before fetching the next instruction.

Branch Prediction
- Longer pipelines can't readily determine the branch outcome early, and the stall penalty becomes unacceptable
- Predict the outcome of the branch, and only stall if the prediction is wrong
- In the MIPS pipeline: can predict branches as not taken and fetch the instruction after the branch, with no delay

MIPS with Predict Not Taken: prediction correct versus prediction incorrect (figure).

More-Realistic Branch Prediction
- Static branch prediction: based on typical branch behavior; for example, for loop and if-statement branches, predict backward branches taken and forward branches not taken
- Dynamic branch prediction: hardware measures actual branch behavior (e.g., records the recent history of each branch) and assumes future behavior will continue the trend; when wrong, stall while re-fetching, and update the history (see the sketch below)
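As a rough illustration of "record recent history of each branch", here is a minimal last-outcome predictor; the 1-bit scheme is my assumption, since the slide does not specify the predictor.

history = {}                           # branch PC -> last outcome (True = taken)

def predict(pc):
    return history.get(pc, False)      # unseen branches: predict not taken

def update(pc, taken):
    history[pc] = taken                # remember the actual outcome for next time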

Pipeline Summary (The BIG Picture)
- Pipelining improves performance by increasing instruction throughput: it executes multiple instructions in parallel, while each instruction has the same latency
- It is subject to hazards: structure, data, and control
- Instruction set design affects the complexity of the pipeline implementation

Acknowledgement: the slides are adapted from Computer Organization and Design, 5th Edition, by David A. Patterson and John L. Hennessy, 2014, published by MK (Elsevier).