EECS150 - Digital Design Lecture 09 - Parallelism


EECS150 - Digital Design Lecture 09 - Parallelism
Feb 19, 2013, John Wawrzynek

Parallelism

Parallelism is the act of doing more than one thing at a time. Optimization in hardware design often involves using parallelism to trade between cost and performance.

Example: student final grade calculation:

    read mt1, mt2, mt3, project;
    grade = 0.2*mt1 + 0.2*mt2 + 0.2*mt3 + 0.4*project;
    write grade;

High-performance hardware implementation: as many operations as possible are done in parallel.
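As a concrete illustration (not from the slides), here is a minimal Verilog sketch of the fully parallel datapath. The module name, the 8-bit widths, and the scale-by-256 fixed-point encoding of the coefficients are all illustrative assumptions:

    // Hypothetical fully parallel datapath for the grade computation.
    // The coefficients are scaled by 256 (0.2 ~ 51/256, 0.4 ~ 102/256)
    // so integer multipliers can be used; all four products are formed
    // in parallel and combined by a balanced adder tree.
    module grade_parallel (
        input  wire [7:0] mt1, mt2, mt3, project,
        output wire [7:0] grade
    );
        wire [15:0] p1 = mt1     * 8'd51;   // ~0.2 * mt1
        wire [15:0] p2 = mt2     * 8'd51;   // ~0.2 * mt2
        wire [15:0] p3 = mt3     * 8'd51;   // ~0.2 * mt3
        wire [15:0] p4 = project * 8'd102;  // ~0.4 * project
        wire [17:0] sum = (p1 + p2) + (p3 + p4);  // two-level adder tree
        assign grade = sum[15:8];  // shift right by 8 to undo the scaling
    endmodule

All four multipliers and both first-level adders operate simultaneously, which is the "as many operations as possible in parallel" point of the slide.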

Parallelism

Is there a lower-cost hardware implementation? A different tree organization? We can factor out the multiply by 0.2: grade = 0.2*(mt1 + mt2 + mt3) + 0.4*project. How about sharing operators (multipliers and adders)? Time-multiplexing a single ALU for all the adds and multiplies attempts to minimize cost at the expense of time, but we need to add extra registers, muxes, and control.

Time-Multiplexing

If we adopt the above approach, we can then view the combinational hardware circuit diagram as an abstract computation graph. This time-multiplexing covers the computation graph by performing the action of each node one at a time (it emulates the graph, in a sense). Using other primitives, other coverings are possible.
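A minimal Verilog sketch of the time-multiplexed version, assuming the factored form above; the module name, the one-operation-per-cycle schedule, and the FSM encoding are illustrative assumptions:

    // Hypothetical time-multiplexed datapath: one shared operation per
    // cycle, sequenced by a small FSM, computing
    //   grade = 0.2*(mt1 + mt2 + mt3) + 0.4*project
    // with the same scale-by-256 coefficient encoding as above.
    module grade_serial (
        input  wire       clk, rst, start,
        input  wire [7:0] mt1, mt2, mt3, project,
        output reg  [7:0] grade,
        output reg        done
    );
        reg [2:0]  state;
        reg [17:0] acc, tmp;  // the extra registers the slide mentions
        always @(posedge clk) begin
            if (rst) begin state <= 3'd0; done <= 1'b0; end
            else begin
                done <= 1'b0;
                case (state)
                    3'd0: if (start) begin acc <= mt1; state <= 3'd1; end
                    3'd1: begin acc <= acc + mt2;        state <= 3'd2; end // add
                    3'd2: begin acc <= acc + mt3;        state <= 3'd3; end // add
                    3'd3: begin acc <= acc * 8'd51;      state <= 3'd4; end // ~0.2 * sum
                    3'd4: begin tmp <= project * 8'd102; state <= 3'd5; end // ~0.4 * project
                    3'd5: begin acc <= acc + tmp;        state <= 3'd6; end // final add
                    3'd6: begin grade <= acc[15:8]; done <= 1'b1; state <= 3'd0; end
                endcase
            end
        end
    endmodule

One operation executes per cycle, so the result takes several cycles instead of one: time is traded for a single shared operator plus registers, muxes, and control.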

HW versus SW

This time-multiplexed ALU approach is very similar to what a conventional software version would do: CPUs time-multiplex their function units (ALUs, etc.):

    add  r2, r1, r3
    add  r2, r2, r4
    mult r2, r4, r5

This model matches our tendency to express computation sequentially, even though most computations naturally contain parallelism. Our programming languages also reinforce the sequential tendency. In hardware we have the ability to exploit a problem's parallelism, which gives us a knob to trade off performance and cost. It may be best to express computations as abstract computation graphs (rather than as "programs"); this should lead to a wider range of implementations. Note: modern processors spend much of their cost budget attempting to restore execution parallelism (superscalar execution).

Example: Video Codec

Exploiting parallelism in hardware: separate algorithm blocks are implemented in separate hardware blocks, or the hardware is time-multiplexed; the entire operation is pipelined (with possible pipelining within the blocks); and loop unrolling is used within blocks or for the entire computation.

Optimizing Iterative Computations

Hardware implementations of computations almost always involve looping. Why? Is this true of software? Are there programs without loops? Maybe in throwaway code; we probably would not bother building such a thing into hardware, would we? (FPGAs may change this.) The fact is, our computations are closely tied to loops, and almost all our hardware includes some looping mechanism. What do we use looping for?

Types of loops:

1) Looping over input data (streaming): e.g., MP3 player, video compressor, music synthesizer.
2) Looping over memory data: e.g., vector inner product, matrix multiply, list processing.

1) and 2) are really very similar; 1) is often turned into 2) by buffering up the input data and processing it offline. Even for online processing, buffers are used to smooth out temporary rate mismatches.

3) CPUs are one big loop (instruction fetch, execute; instruction fetch, execute; ...) but change their personality with each iteration.
4) Others?

Loops offer an opportunity for parallelism: execute more than one iteration at once, using parallel iteration execution and/or pipelining.
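As an illustration of loop type 1 (not from the slides), a minimal Verilog sketch of a streaming datapath that performs one loop iteration per clock; the module name, widths, and handshake are illustrative assumptions:

    // Hypothetical streaming loop: each cycle that `valid` is high, one
    // new sample x and coefficient w arrive and are folded into a
    // running inner product, i.e., one loop iteration per clock.
    module stream_mac (
        input  wire               clk, rst, valid,
        input  wire signed [15:0] x, w,   // streamed operand pair
        output reg  signed [39:0] acc     // running sum of products
    );
        always @(posedge clk) begin
            if (rst)        acc <= 40'sd0;
            else if (valid) acc <= acc + x * w;
        end
    endmodule

Buffering the x stream into a memory and reading it back later would turn this type-1 loop into a type-2 loop, as the slide describes.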

Pipelining Principle

With looping, usually we are less interested in the latency of one iteration and more interested in the loop execution rate, or throughput. The two can differ because of parallel iteration execution and/or pipelining.

Pipelining review from CS61C, using the analogy of washing clothes:

step 1: wash (20 minutes)
step 2: dry (20 minutes)
step 3: fold (20 minutes)

Done sequentially, 4 loads take 4 x 60 minutes = 4 hours. With wash, dry, and fold overlapped, a new load completes every 20 minutes, and the 4 loads take 2 hours.

Pipelining

In the limit, as we increase the number of loads, the average time per load approaches 20 minutes. The latency (time from start to end) for one load is still 60 minutes, and the throughput is 3 loads/hour. Pipelined throughput ≈ number of pipe stages × unpipelined throughput.
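The laundry numbers follow from the standard pipeline timing relation (not spelled out on the slides): for k stages of time t each and n items,

    \[ T_{\text{pipelined}} = (k + n - 1)\,t \]

With k = 3 stages, t = 20 min, and n = 4 loads: T = (3 + 4 - 1) * 20 = 120 min = 2 hours. As n grows, T/n approaches t = 20 min per load, i.e., throughput approaches 1/t = 3 loads/hour.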

Pipelining

General principle: assume a combinational logic (CL) delay of T = 8 ns and a flip-flop overhead of T_FF (setup + clk-to-Q) = 1 ns, so F = 1/(8 ns + 1 ns) = 1/(9 ns) ≈ 111 MHz. Now cut the CL block into pieces (stages) and separate them with registers, assuming T1 = T2 = 4 ns. The total latency grows to T = 4 ns + 1 ns + 4 ns + 1 ns = 10 ns, but the clock period shrinks to 4 ns + 1 ns = 5 ns, so F = 1/(5 ns) = 200 MHz: the CL block produces a new result every 5 ns instead of every 9 ns.

Limits on Pipelining

Without FF overhead, the throughput improvement is proportional to the number of stages. After many stages are added, the FF overhead (the setup and clk-to-Q times) begins to dominate. Other limiters to effective pipelining:

- clock skew contributes to clock overhead
- unequal stages
- FFs dominate cost
- clock distribution
- power consumption
- feedback (dependencies between loop iterations)
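In general, for a CL block of delay T cut into N equal stages (a standard relation, consistent with the numbers above):

    \[ F = \frac{1}{T/N + T_{FF}} \]

With T = 8 ns and T_FF = 1 ns: N = 1 gives F ≈ 111 MHz and N = 2 gives F = 200 MHz, as above. As N grows, F approaches but never reaches 1/T_FF = 1 GHz, which is why the FF overhead eventually dominates.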

Pipelining Example

F(x): y_i = a*x_i^2 + b*x_i + c. In the computation graph, x and y are assumed to be streams. Divide the graph into 3 (nearly) equal stages and insert pipeline registers at the dashed lines (the stage boundaries).

Can we pipeline the basic operators themselves? Possible, but usually not done: arithmetic units can often be made sufficiently fast without internal pipelining. (Example: a pipelined adder.)
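A minimal Verilog sketch of one possible 3-stage cut of this computation (the stage boundaries chosen, the widths, and the assumption of constant coefficients are all illustrative):

    // Hypothetical 3-stage pipeline for y_i = a*x_i^2 + b*x_i + c.
    // Stage 1 forms x*x and b*x; stage 2 forms a*(x*x) and carries b*x
    // along; stage 3 does the final sum. a, b, c are assumed constant,
    // so they need no pipeline registers of their own.
    module poly_pipe (
        input  wire               clk,
        input  wire signed [15:0] x, a, b, c,
        output reg  signed [47:0] y
    );
        reg signed [31:0] xx_s1, bx_s1;   // stage 1 registers
        reg signed [47:0] axx_s2;         // stage 2 registers
        reg signed [31:0] bx_s2;
        always @(posedge clk) begin
            xx_s1  <= x * x;              // stage 1
            bx_s1  <= b * x;
            axx_s2 <= a * xx_s1;          // stage 2
            bx_s2  <= bx_s1;
            y      <= axx_s2 + bx_s2 + c; // stage 3
        end
    endmodule

Each stage holds roughly one multiply (or the final additions), giving three nearly equal stages; a new x enters and a new y leaves every clock, with a latency of 3 cycles.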

Pipelining Loops with Feedback

Example 1: y_i = y_{i-1} + x_i + a. In the unpipelined version, add1 computes x_i + y_{i-1} and add2 produces y_i: a loop-carried dependency. Can we cut the feedback and overlap iterations? Try putting a register after add1: we still can't overlap the iterations, because of the dependency, and the extra register doesn't help the situation (it actually hurts). In general, we can't pipeline feedback loops.

However, we can overlap the non-feedback part of the iterations. Add is associative and commutative, so we can reorder the computation to shorten the delay of the feedback path:

    y_i = (y_{i-1} + x_i) + a = (a + x_i) + y_{i-1}

Now add1 computes a + x_i, which involves no feedback, and only add2, which produces y_i, remains on the (shortened) feedback path. Pipelining is limited to 2 stages.
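A minimal Verilog sketch of the reordered 2-stage version (the module and signal names are hypothetical):

    // Hypothetical 2-stage version of y_i = (a + x_i) + y_{i-1}.
    // Stage 1 (a + x) has no feedback and sits behind a pipeline
    // register; stage 2 is the single adder left on the feedback path.
    // Note the pipeline adds one cycle of latency to the x stream.
    module feedback_loop (
        input  wire               clk, rst,
        input  wire signed [15:0] x, a,
        output reg  signed [15:0] y
    );
        reg signed [15:0] ax_s1;  // register after add1
        always @(posedge clk) begin
            if (rst) begin
                ax_s1 <= 16'sd0;
                y     <= 16'sd0;
            end else begin
                ax_s1 <= a + x;      // add1: feedback-free, pipelinable
                y     <= ax_s1 + y;  // add2: the loop-carried path
            end
        end
    endmodule

The critical (feedback) path is now a single adder rather than two adders in series, which is exactly the benefit of the reordering.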

Beyond Pipelining - SIMD Parallelism

An obvious way to exploit more parallelism from loops is to make multiple instances of the loop-execution datapath and run them in parallel, sharing the same controller. For P instances, throughput improves by a factor of P. Example: y_i = f(x_i), with four copies of f consuming x_i, x_{i+1}, x_{i+2}, x_{i+3} and producing y_i, y_{i+1}, y_{i+2}, y_{i+3} simultaneously. This is usually called SIMD (Single Instruction, Multiple Data) parallelism. It assumes the next 4 x values are available at once; the validity of this assumption depends on the ratio of the repeat rate of f to the input rate (or memory bandwidth). Cost ∝ P, usually much higher than for pipelining; however, it potentially provides a high speedup, and it is often applied after pipelining. It is limited, once again, by loop-carried dependencies: feedback translates into dependencies between the parallel datapaths. Vector processors use this technique.

SIMD Parallelism with Feedback

Example, from earlier: y_i = a*y_{i-1} + x_i + b. In this example we end up with a carry-ripple situation across the parallel datapaths. We could employ look-ahead / parallel-prefix optimization techniques to speed up the propagation. As with pipelining, this technique is most effective in the absence of a loop-carried dependence.
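A minimal Verilog sketch of P-way SIMD replication for a feedback-free y_i = f(x_i), using a generate loop; the choice f(x) = x*x + 1, the parameter names, and the packed-bus interface are illustrative assumptions:

    // Hypothetical SIMD datapath: P identical copies of f share one
    // controller (here just a common clock enable) and consume P input
    // samples per cycle. f(x) = x*x + 1 stands in for any feedback-free
    // per-element function.
    module simd_map #(
        parameter P = 4,   // number of parallel datapath instances
        parameter W = 16   // element width
    ) (
        input  wire             clk, en,    // shared control
        input  wire [P*W-1:0]   x_flat,     // P packed inputs
        output wire [P*2*W-1:0] y_flat      // P packed results
    );
        genvar i;
        generate
            for (i = 0; i < P; i = i + 1) begin : lane
                wire [W-1:0]   x = x_flat[i*W +: W];
                reg  [2*W-1:0] y;
                always @(posedge clk)
                    if (en) y <= x * x + 1;  // lane i computes f(x_i)
                assign y_flat[i*2*W +: 2*W] = y;
            end
        endgenerate
    endmodule

Throughput improves by the factor P, and cost also grows with P (P multipliers here), matching the Cost ∝ P observation; a loop-carried dependence would instead create a chain between the lanes.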