CS 61C: Great Ideas in Computer Architecture. The Flynn Taxonomy, Intel SIMD Instructions

CS 61C: Great Ideas in Computer Architecture
The Flynn Taxonomy, Intel SIMD Instructions
Guest Lecturer: Alan Christopher
Spring 2014 -- Lecture #19 (3/08/2014)

Neuromorphic Chips
Researchers at IBM and HRL Laboratories are looking into building computer chips that attempt to mimic natural thought patterns in order to more effectively solve problems in AI.
http://www.technologyreview.com/featuredstory/522476/thinking-in-silicon/

Review of Last Lecture
- Amdahl's Law limits the benefits of parallelization
- Request Level Parallelism: handle multiple requests in parallel (e.g. web search)
- MapReduce Data Level Parallelism
  - Framework to divide up data to be processed in parallel
  - Mapper outputs intermediate key-value pairs
  - Reducer combines intermediate values with the same key

Great Idea #4: Parallelism
Leverage parallelism to achieve high performance.
Software:
- Parallel Requests, assigned to a computer (e.g. search "Garcia")
- Parallel Threads, assigned to a core (e.g. lookup, ads)
- Parallel Instructions: more than 1 instruction at one time (e.g. 5 pipelined instructions)
- Parallel Data: more than 1 data item at one time (e.g. add of 4 pairs of words) -- we are here
- Hardware descriptions: all gates functioning in parallel at the same time
[Figure: hardware levels from Warehouse Scale Computer down to Logic Gates -- Computer (Cores, Memory, Input/Output), Core (Instruction Unit(s), Functional Unit(s), Cache Memory), Smart Phone; the functional units compute A0+B0, A1+B1, A2+B2, A3+B3 in parallel]

Agenda
- Flynn's Taxonomy
- Administrivia
- Data Level Parallelism and SIMD
- Bonus: Loop Unrolling

Hardware vs. Software Parallelism
- The choices of hardware parallelism and software parallelism are independent
  - Concurrent software can also run on serial hardware
  - Sequential software can also run on parallel hardware
- Flynn's Taxonomy classifies parallel hardware

Flynn's Taxonomy
- SIMD and MIMD are the most commonly encountered today
- Most common parallel processing programming style: Single Program Multiple Data (SPMD)
  - A single program runs on all processors of an MIMD machine
  - Cross-processor execution is coordinated through conditional expressions (we will see this later in Thread Level Parallelism)
- SIMD: specialized function units (hardware) for handling lock-step calculations involving arrays
  - Used in scientific computing, signal processing, and multimedia (audio/video processing)

Single Instruction/Single Data Stream (SISD)
- A sequential computer that exploits no parallelism in either the instruction or data streams
- Examples of SISD architecture are traditional uniprocessor machines
[Figure: a single processing unit fed by one instruction stream and one data stream]

Multiple Instruction/Single Data Stream (MISD)
- Exploits multiple instruction streams against a single data stream, for data operations that can be naturally parallelized (e.g. certain kinds of array processors)
- MISD machines are no longer commonly encountered; they are mainly of historical interest

Single Instruction/Multiple Data Stream (SIMD)
- A computer that applies a single instruction stream to multiple data streams, for operations that may be naturally parallelized (e.g. SIMD instruction extensions or Graphics Processing Units)

Multiple Instruction/Multiple Data Stream (MIMD)
- Multiple autonomous processors simultaneously executing different instructions on different data
- MIMD architectures include multicore processors and Warehouse Scale Computers

Agenda
- Flynn's Taxonomy
- Administrivia
- Data Level Parallelism and SIMD
- Bonus: Loop Unrolling

Administrivia
- Midterm next Wednesday
  - You may have 1 double-sided 8.5" x 11" sheet of handwritten notes
  - You may also bring an unedited copy of the MIPS green sheet; we will provide a copy if you forget yours
- Proj2 (MapReduce) to be released soon
  - Part 1 due date pushed later
  - Work in partners; 1 of you must know Java

Agenda
- Flynn's Taxonomy
- Administrivia
- Data Level Parallelism and SIMD
- Bonus: Loop Unrolling

SIMD Architectures
- Data-Level Parallelism (DLP): executing one operation on multiple data streams
- Example: multiplying a coefficient vector by a data vector (e.g. in filtering)
      y[i] = c[i] * x[i],  for 0 <= i < n
- Sources of performance improvement:
  - One instruction is fetched and decoded for the entire operation
  - The multiplications are known to be independent
  - Pipelining/concurrency in memory access as well
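
As a reference point, here is the scalar form of that filtering loop in C (a minimal sketch; the function and array names are illustrative, not from the slides). Every iteration is independent, which is exactly what lets a SIMD machine perform several of these multiplications with one instruction.

    /* Scalar (SISD) form of y[i] = c[i] * x[i]: one multiply per instruction. */
    void filter_scale(const float *c, const float *x, float *y, int n) {
        for (int i = 0; i < n; i++) {
            y[i] = c[i] * x[i];   /* each iteration is independent of the others */
        }
    }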

Advanced Digital Media Boost
- To improve performance, Intel's SIMD instructions fetch one instruction and do the work of multiple instructions
- MMX (MultiMedia eXtension, Pentium II processor family)
- SSE (Streaming SIMD Extension, Pentium III and beyond)

Example: SIMD Array Processing
Pseudocode:
    for each f in array:
        f = sqrt(f)

SISD:
    for each f in array {
        load f into the floating-point register
        calculate the square root
        write the result from the register to memory
    }

SIMD:
    for every 4 members in array {
        load 4 members into the SSE register
        calculate 4 square roots in one operation
        write the result from the register to memory
    }
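
The SIMD version can be written directly in C with Intel's SSE intrinsics (a sketch, not from the lecture; it assumes n is a multiple of 4 and uses the standard xmmintrin.h intrinsics):

    #include <xmmintrin.h>   /* SSE intrinsics: __m128, _mm_sqrt_ps, ... */

    /* Compute sqrt of each element, 4 floats at a time (assumes n % 4 == 0). */
    void sqrt_array(float *a, int n) {
        for (int i = 0; i < n; i += 4) {
            __m128 v = _mm_loadu_ps(&a[i]);   /* load 4 packed single-precision floats */
            v = _mm_sqrt_ps(v);               /* 4 square roots in one instruction     */
            _mm_storeu_ps(&a[i], v);          /* store 4 results back to memory        */
        }
    }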

SSE Instruction Categories for Multimedia Support
- Intel processors are CISC (complex instructions)
- SSE-2+ supports wider data types, allowing 16 8-bit and 8 16-bit operands

Intel Architecture SSE2+ 128-Bit SIMD Data Types
- Packed bytes: 16 8-bit operands per 128 bits
- Packed words: 8 16-bit operands per 128 bits
- Packed doublewords: 4 32-bit operands per 128 bits
- Packed quadwords: 2 64-bit operands per 128 bits
- Note: in Intel Architecture (unlike MIPS) a word is 16 bits
  - Single precision FP: doubleword (32 bits)
  - Double precision FP: quadword (64 bits)
[Figure: 128-bit registers subdivided at 8-, 16-, 32-, and 64-bit boundaries between bits 127 and 0]
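
In C, these packed layouts correspond to the SSE vector types used by the intrinsics (a sketch for orientation, not part of the original slides):

    #include <emmintrin.h>   /* SSE2 intrinsics and 128-bit vector types */

    /* Each type names one 128-bit (XMM-sized) value, interpreted differently: */
    __m128  four_floats;     /* 4 x 32-bit single-precision FP (packed doublewords) */
    __m128d two_doubles;     /* 2 x 64-bit double-precision FP (packed quadwords)   */
    __m128i packed_ints;     /* integer data: 16 x 8-bit, 8 x 16-bit, 4 x 32-bit... */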

XMM Registers
- The architecture is extended with eight 128-bit data registers (XMM0 - XMM7)
- The 64-bit address architecture adds eight more (XMM8 - XMM15), for 16 128-bit registers in total
- E.g. the 128-bit packed single-precision floating-point data type (4 doublewords) allows four single-precision operations to be performed simultaneously
[Figure: registers XMM7 down to XMM0, each spanning bits 127..0]

SSE/SSE2 Floating Point Instructions
- {SS} Scalar Single precision FP: one 32-bit operand in a 128-bit register
- {PS} Packed Single precision FP: four 32-bit operands in a 128-bit register
- {SD} Scalar Double precision FP: one 64-bit operand in a 128-bit register
- {PD} Packed Double precision FP: two 64-bit operands in a 128-bit register

SSE/SSE2 Floating Point Instructions (continued)
- xmm: one operand is a 128-bit SSE2 register
- mem/xmm: the other operand is in memory or an SSE2 register
- {A} means the 128-bit memory operand is aligned
- {U} means the 128-bit memory operand is unaligned
- {H} means move the high half of the 128-bit operand
- {L} means move the low half of the 128-bit operand
- For example, ADDPS adds packed single-precision operands, while MOVUPS moves an unaligned packed single-precision operand.

Example: Add Single Precision FP Vectors
Computation to be performed:
    vec_res.x = v1.x + v2.x;
    vec_res.y = v1.y + v2.y;
    vec_res.z = v1.z + v2.z;
    vec_res.w = v1.w + v2.w;

SSE instruction sequence:
    # move from mem to XMM register, memory aligned, packed single precision
    movaps address-of-v1, %xmm0        # v1.w | v1.z | v1.y | v1.x -> xmm0
    # add from mem to XMM register, packed single precision
    addps  address-of-v2, %xmm0        # v1.w+v2.w | v1.z+v2.z | v1.y+v2.y | v1.x+v2.x -> xmm0
    # move from XMM register to mem, memory aligned, packed single precision
    movaps %xmm0, address-of-vec_res
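
The same computation can be written with SSE intrinsics and left to the compiler to lower into movaps/addps (a minimal sketch; the struct layout is illustrative and unaligned loads are used to keep the example safe):

    #include <xmmintrin.h>

    typedef struct { float x, y, z, w; } vec4;

    void vec_add(const vec4 *v1, const vec4 *v2, vec4 *res) {
        __m128 a   = _mm_loadu_ps(&v1->x);   /* load v1.x..v1.w as packed singles   */
        __m128 b   = _mm_loadu_ps(&v2->x);   /* load v2.x..v2.w                     */
        __m128 sum = _mm_add_ps(a, b);       /* four single-precision adds at once  */
        _mm_storeu_ps(&res->x, sum);         /* store all four results              */
        /* With 16-byte-aligned data, _mm_load_ps/_mm_store_ps (movaps) could be used. */
    }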

Example: Image Converter (1/5)
- Converts a BMP (bitmap) image to a YUV (color space) image format:
  - Read individual pixels from the BMP image, convert pixels into YUV format
  - Can pack the pixels and operate on a set of pixels with a single instruction
- A bitmap image consists of 8-bit monochrome pixels
  - By packing these pixel values in a 128-bit register, we can operate on 128/8 = 16 values at a time
  - Significant performance boost
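
As a rough illustration of "operate on 16 values at a time" (a sketch, not the lecture's converter; it assumes the pixel count is a multiple of 16), the following adds a brightness offset to 8-bit pixels with one saturating SIMD add per 16 pixels:

    #include <emmintrin.h>   /* SSE2 integer intrinsics */
    #include <stdint.h>

    /* Brighten 8-bit pixels, 16 at a time (assumes n % 16 == 0). */
    void brighten(uint8_t *pixels, int n, uint8_t offset) {
        __m128i add = _mm_set1_epi8((char)offset);          /* 16 copies of the offset */
        for (int i = 0; i < n; i += 16) {
            __m128i p = _mm_loadu_si128((__m128i *)&pixels[i]);
            p = _mm_adds_epu8(p, add);                       /* 16 saturating byte adds */
            _mm_storeu_si128((__m128i *)&pixels[i], p);
        }
    }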

Example: Image Converter (2/5)
- FMADDPS: multiply-and-add packed single precision floating point instruction
- One of the typical operations computed in transformations (e.g. DFT or FFT):
      P = sum over n = 1 to N of f(n) * x(n)
- A CISC instruction!

Example: Image Converter (3/5)
- FP numbers f(n) and x(n) are in src1 and src2; p is in dest
- C implementation for N = 4 (128 bits):
      for (int i = 0; i < 4; i++)
          p = p + src1[i] * src2[i];
- 1) Regular x86 instructions for the inner loop:
      fmul  [ ]
      faddp [ ]
  Instructions executed: 4 * 2 = 8 (x86)

Example: Image Converter (4/5)
- FP numbers f(n) and x(n) are in src1 and src2; p is in dest
- C implementation for N = 4 (128 bits):
      for (int i = 0; i < 4; i++)
          p = p + src1[i] * src2[i];
- 2) SSE2 instructions for the inner loop:
      // xmm0 = p, xmm1 = src1[i], xmm2 = src2[i]
      mulps %xmm1,%xmm2   // xmm2 * xmm1 -> xmm2
      addps %xmm2,%xmm0   // xmm0 + xmm2 -> xmm0
  Instructions executed: 2 (SSE2)

Example: Image Converter (5/5)
- FP numbers f(n) and x(n) are in src1 and src2; p is in dest
- C implementation for N = 4 (128 bits):
      for (int i = 0; i < 4; i++)
          p = p + src1[i] * src2[i];
- 3) SSE5 accomplishes the same in one instruction:
      fmaddps %xmm0, %xmm1, %xmm2, %xmm0
      // xmm2 * xmm1 + xmm0 -> xmm0
      // multiply xmm1 by xmm2 packed single,
      // then add the product packed single to the sum in xmm0
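
The mulps/addps step maps naturally onto SSE intrinsics in C (a sketch under the assumption that src1 and src2 each hold 4 floats; a final horizontal reduction collapses the 4 partial products into the scalar p):

    #include <xmmintrin.h>

    /* p = src1[0]*src2[0] + ... + src1[3]*src2[3], using a packed multiply. */
    float dot4(const float *src1, const float *src2) {
        __m128 a    = _mm_loadu_ps(src1);
        __m128 b    = _mm_loadu_ps(src2);
        __m128 prod = _mm_mul_ps(a, b);     /* mulps: 4 products at once            */
        float lanes[4];
        _mm_storeu_ps(lanes, prod);         /* spill the 4 partial products         */
        return lanes[0] + lanes[1] + lanes[2] + lanes[3];   /* horizontal reduction */
    }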

Summary
- Flynn Taxonomy of Parallel Architectures
  - SIMD: Single Instruction Multiple Data
  - MIMD: Multiple Instruction Multiple Data
  - SISD: Single Instruction Single Data
  - MISD: Multiple Instruction Single Data (unused)
- Intel SSE SIMD Instructions
  - One instruction fetch that operates on multiple operands simultaneously
  - 128/64-bit XMM registers

You are responsible for the material contained on the following slides, though we may not have enough time to get to them in lecture. They have been prepared in a way that should be easily readable, and the material will be touched upon in the following lecture.

Agenda
- Flynn's Taxonomy
- Administrivia
- Data Level Parallelism and SIMD
- Bonus: Loop Unrolling

Data Level Parallelism and SIMD
- SIMD wants adjacent values in memory that can be operated on in parallel
- Usually specified in programs as loops:
      for (i = 0; i < 1000; i++)
          x[i] = x[i] + s;
- How can we reveal more data-level parallelism than is available in a single iteration of a loop?
  - Unroll the loop and adjust the iteration rate

Looping in MIPS
Assumptions:
- $s0 holds the initial address (beginning of the array)
- $s1 holds the scalar value s
- $s2 holds the termination address (end of the array)

    Loop: lw    $t0,0($s0)       # load array element
          addu  $t0,$t0,$s1      # add s to array element
          sw    $t0,0($s0)       # store result
          addiu $s0,$s0,4        # move to next element
          bne   $s0,$s2,Loop     # repeat Loop if not done

Loop Unrolled

    Loop: lw    $t0,0($s0)
          addu  $t0,$t0,$s1
          sw    $t0,0($s0)
          lw    $t1,4($s0)
          addu  $t1,$t1,$s1
          sw    $t1,4($s0)
          lw    $t2,8($s0)
          addu  $t2,$t2,$s1
          sw    $t2,8($s0)
          lw    $t3,12($s0)
          addu  $t3,$t3,$s1
          sw    $t3,12($s0)
          addiu $s0,$s0,16
          bne   $s0,$s2,Loop

NOTE:
1. Using different registers eliminates stalls
2. Loop overhead is encountered only once every 4 data iterations
3. This unrolling works only if loop_limit mod 4 = 0

Loop Unrolled Scheduled
Note: we just switched from integer instructions to single-precision FP instructions!

    Loop: lwc1  $t0,0($s0)       # 4 loads side-by-side:
          lwc1  $t1,4($s0)       #   could replace with a 4-wide SIMD load
          lwc1  $t2,8($s0)
          lwc1  $t3,12($s0)
          add.s $t0,$t0,$s1      # 4 adds side-by-side:
          add.s $t1,$t1,$s1      #   could replace with a 4-wide SIMD add
          add.s $t2,$t2,$s1
          add.s $t3,$t3,$s1
          swc1  $t0,0($s0)       # 4 stores side-by-side:
          swc1  $t1,4($s0)       #   could replace with a 4-wide SIMD store
          swc1  $t2,8($s0)
          swc1  $t3,12($s0)
          addiu $s0,$s0,16
          bne   $s0,$s2,Loop

Loop Unrolling in C
Instead of having the compiler do the loop unrolling, you could do it yourself in C:
    for (i = 0; i < 1000; i++)
        x[i] = x[i] + s;

Loop unrolled:
    for (i = 0; i < 1000; i = i + 4) {
        x[i]   = x[i]   + s;
        x[i+1] = x[i+1] + s;
        x[i+2] = x[i+2] + s;
        x[i+3] = x[i+3] + s;
    }

What is the downside of doing this in C?
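
For comparison, the unrolled body has exactly the shape a 4-wide SSE version takes (a sketch, not from the slides; it assumes the elements are floats and the trip count of 1000 is a multiple of 4, and it broadcasts s into all four lanes with _mm_set1_ps):

    #include <xmmintrin.h>

    /* x[i] = x[i] + s for 0 <= i < 1000, four elements per iteration. */
    void add_scalar(float *x, float s) {
        __m128 vs = _mm_set1_ps(s);              /* s replicated into 4 lanes */
        for (int i = 0; i < 1000; i += 4) {
            __m128 v = _mm_loadu_ps(&x[i]);      /* 4-wide SIMD load          */
            v = _mm_add_ps(v, vs);               /* 4-wide SIMD add           */
            _mm_storeu_ps(&x[i], v);             /* 4-wide SIMD store         */
        }
    }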

Generalizing Loop Unrolling
Take a loop of n iterations and perform a k-fold unrolling of the body of the loop:
- First run the loop with k copies of the body, floor(n/k) times
- Then, to finish the leftovers, run the loop with 1 copy of the body, (n mod k) times
(We will revisit loop unrolling when we get to pipelining later in the semester.)
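
A sketch of that recipe in C, for k = 4 and an arbitrary n (the function name and float element type are illustrative assumptions):

    /* k-fold unrolling (k = 4) of x[i] = x[i] + s for an arbitrary n. */
    void add_scalar_any_n(float *x, int n, float s) {
        int i;
        /* Main loop: 4 copies of the body, runs floor(n/4) times. */
        for (i = 0; i + 3 < n; i += 4) {
            x[i]   = x[i]   + s;
            x[i+1] = x[i+1] + s;
            x[i+2] = x[i+2] + s;
            x[i+3] = x[i+3] + s;
        }
        /* Cleanup loop: 1 copy of the body, runs n mod 4 times. */
        for (; i < n; i++) {
            x[i] = x[i] + s;
        }
    }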