
CSCI 4717/5717 Computer Architecture
Topic: Symmetric Multiprocessors & Clusters
Reading: Stallings, Sections 18.1 through 18.4

Classifications of Parallel Processing
M. Flynn classified the types of parallel processing in 1972 ("Some Computer Organizations and Their Effectiveness," IEEE Transactions on Computers).

Types of Parallel Processor Systems (Figure 18.2)
- Single instruction, single data stream (SISD): a single processor operates on a single instruction stream from a single memory (uniprocessor).
- Single instruction, multiple data stream (SIMD): lockstep operation of multiple processing elements on a single instruction memory, with one data memory per processing element (vector/array processing).
- Multiple instruction, single data stream (MISD): multiple processors execute different sequences of instructions on a single data set. Not commercially implemented.
- Multiple instruction, multiple data stream (MIMD): a set of processors simultaneously executes different instructions on different data sets.

MIMD Characteristics
- Processors are general purpose; each processor should be able to complete a process by itself.
- Communication methods:
  - Through shared memory ("tightly coupled"):
    - Symmetric multiprocessor (SMP): memory access times are consistent for all processors
    - Nonuniform memory access (NUMA): memory access times may differ
  - Cluster: through fixed connections or a network ("loosely coupled")
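To make the SIMD/MIMD distinction concrete, here is a minimal C sketch (illustrative only, not from the slides; it assumes an x86 target with SSE and POSIX threads): the first routine applies one instruction to four data elements at a time, while the second launches independent instruction streams on independent data.

```c
#include <immintrin.h>   /* SSE intrinsics (assumes an x86 target) */
#include <pthread.h>
#include <stdio.h>

/* SIMD: one instruction stream, multiple data elements per instruction. */
void simd_add(const float *a, const float *b, float *c, int n) {
    for (int i = 0; i + 4 <= n; i += 4) {
        __m128 va = _mm_loadu_ps(&a[i]);          /* load 4 floats */
        __m128 vb = _mm_loadu_ps(&b[i]);
        _mm_storeu_ps(&c[i], _mm_add_ps(va, vb)); /* 4 adds in lockstep */
    }
}

/* MIMD: each thread is an independent instruction stream on its own data. */
void *worker(void *arg) {
    int id = *(int *)arg;
    printf("thread %d running its own instruction stream\n", id);
    return NULL;
}

int main(void) {
    float a[8] = {1,2,3,4,5,6,7,8}, b[8] = {8,7,6,5,4,3,2,1}, c[8];
    simd_add(a, b, c, 8);

    pthread_t t[2];
    int ids[2] = {0, 1};
    for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
    return 0;
}
```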

Symmetric Multiprocessors (SMP)
A stand-alone computer with the following traits:
- Two or more similar processors of comparable capacity
- Processors share the same memory and I/O
- Processors are connected by a bus or other internal connection
- Memory access time is approximately the same for each processor
- All processors share access to I/O, either through the same channels or through different channels providing paths to the same devices
- All processors can perform the same functions (hence "symmetric")
- The system is controlled by an integrated operating system providing interaction between processors at the job, task, file, and data element levels

Integrated Operating System
- The O/S for an SMP is NOT like that of clusters/loosely coupled systems, where communication is usually at the file level
- There can be a high degree of interaction between processes
- The O/S schedules processes or threads across all processors

SMP Advantages (realized only if the O/S can provide parallelism)
- Performance, but only if some of the work can be done in parallel
- Availability/reliability: since all processors can perform the same functions, failure of a single processor does not halt the system
- Incremental growth: the user can enhance performance by adding additional processors
- Scaling: vendors can offer a range of products based on the number of processors
- Transparent to the user: the user only sees an improvement in performance

Organization of Tightly Coupled Multiprocessors
- Individual processors are self-contained, i.e., they have their own control unit, ALU, registers, and one or more levels of cache
- Access to shared memory and I/O devices is through some interconnection network
- Processors communicate with each other through memory in a common data area
- Memory is often organized to provide simultaneous access to separate blocks of memory
- Interconnection approaches: time-shared or common bus, central controller (arbitrator), multiport memory
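A minimal sketch of that transparency, assuming Linux (sched_getcpu() is glibc/Linux-specific; illustrative, not from the slides): the program creates ordinary threads and the SMP O/S spreads them across processors on its own.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Each thread reports which CPU the O/S scheduler placed it on.
   On an SMP system the threads typically land on different processors,
   with no involvement from the programmer. */
void *report(void *arg) {
    printf("thread %ld running on CPU %d\n", (long)arg, sched_getcpu());
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (long i = 0; i < 4; i++)
        pthread_create(&t[i], NULL, report, (void *)i);
    for (int i = 0; i < 4; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```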

Organization of Tightly Coupled Multiprocessors: Time-Shared Bus
- Structure and interface are similar to a single-processor system (control, address, and data lines), much as in DMA with a single processor, except that there are now multiple processors as well as multiple I/O modules
- Features provided:
  - Addressing: distinguishes the modules on the bus
  - Arbitration: any module can be a temporary bus master, so there must be an arbitration scheme
  - Time sharing: if one module has the bus, the others must wait and may have to suspend

Time-Shared Bus Advantages
- Simplicity: not only is it easy to understand, the form is already used with DMA
- Flexibility: adding a processor involves simply attaching it to the bus
- Reliability: as long as arbitration does not involve a single controller, there is no single point of failure

Time-Shared Bus Disadvantages
- Waiting for the bus creates a bottleneck
- This can be helped with individual caches (usually L1 and L2), but then a cache coherence policy must be used (usually implemented in hardware)

Multiport Memory
- Direct, independent access to the memory modules by each processor and I/O module
- Logic internal to the memory is required to resolve conflicts
- Little or no modification is required to processors or I/O modules designed for single-processor applications
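The time-sharing rule above (one bus master at a time, everyone else waits) behaves like mutual exclusion, so a mutex can stand in for the arbiter in a toy model. A minimal sketch (illustrative only; real arbitration is done in hardware):

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* The mutex plays the role of the bus arbiter: only the module that
   holds it is bus master; every other module waits, exactly as in the
   time-sharing scheme described above. */
static pthread_mutex_t bus = PTHREAD_MUTEX_INITIALIZER;

void *module(void *arg) {
    long id = (long)arg;
    for (int xfer = 0; xfer < 3; xfer++) {
        pthread_mutex_lock(&bus);          /* request bus mastership */
        printf("module %ld owns the bus (transfer %d)\n", id, xfer);
        usleep(1000);                      /* occupy the bus briefly */
        pthread_mutex_unlock(&bus);        /* release to other modules */
    }
    return NULL;
}

int main(void) {
    pthread_t t[3];
    for (long i = 0; i < 3; i++) pthread_create(&t[i], NULL, module, (void *)i);
    for (int i = 0; i < 3; i++) pthread_join(t[i], NULL);
    return 0;
}
```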

Multiport Memory Advantages
- Removes the bus-access bottleneck
- Portions of memory can be dedicated to only one processor, giving better security and better recovery from faults

Multiport Memory Disadvantages
- Complex memory logic
- More PCB wiring
- A write-through policy should be used for the caches

Central Control Unit
Functions:
- Funnels separate data streams between independent modules
- Can buffer requests
- Performs arbitration and timing
- Passes status and control information
- Performs cache-update alerting
It uses the same control, addressing, and data interfaces as a typical processor, so the interfaces to the modules remain the same.
Disadvantages:
- Very complex control unit
- The control unit is a possible bottleneck

SMP Operating System
- To the user, it appears as if there is a single O/S, i.e., a single-processor multiprogramming system
- The user should be able to create multithreaded processes without needing to know whether one processor or more will be used

SMP Operating System Design Issues
- Simultaneous concurrent processes:
  - O/S routines should be reentrant (see the sketch after this section)
  - O/S tables and other management structures must be expanded to handle multiple processes and processors
- Scheduling: not just the order matters now, but also which processor gets a process; any processor should be capable of scheduling, too
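To illustrate the reentrancy requirement, a minimal C sketch (illustrative only, not from the slides): the first routine keeps its result in a static buffer, so two processors running it concurrently overwrite each other; the second keeps all state in caller-supplied storage and can safely run on any number of processors at once.

```c
#include <stdio.h>

/* NOT reentrant: the static buffer is shared by every caller, so two
   processors executing this routine at the same time corrupt each other. */
char *format_bad(int x) {
    static char buf[32];
    snprintf(buf, sizeof buf, "value=%d", x);
    return buf;
}

/* Reentrant: all state lives in the caller-supplied buffer, so the
   routine can run simultaneously on several processors. */
void format_good(int x, char *buf, int len) {
    snprintf(buf, len, "value=%d", x);
}

int main(void) {
    char b1[32], b2[32];
    format_good(1, b1, sizeof b1);
    format_good(2, b2, sizeof b2);
    printf("%s %s\n", b1, b2);   /* prints: value=1 value=2 */
    return 0;
}
```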

SMP Operating System Design Issues (continued)
- Synchronization: the scheduling of resources is now an issue not just for processes but also for processors
- Memory management:
  - A shared page-replacement strategy
  - Must understand and take advantage of the memory hardware
- Reliability and fault tolerance: must be able to handle the loss of a processor without taking down the other processors

Cache Coherence
- One or two levels of cache are typically associated with each processor; this is essential for performance
- Problem: multiple copies of the same data in different caches can result in an inconsistent view of memory

Write Policy Review
- Write-back policy: a write goes only to the cache, and main memory is updated only when the cache block is replaced. This can lead to inconsistency.
- Write-through policy: all writes are made to both the cache and main memory. Inconsistencies can still occur unless all caches monitor the memory traffic.

Software Solutions
- The compiler and operating system deal with the problem: overhead is transferred to compile time, and design complexity is transferred from hardware to software
- Software tends to make conservative decisions, leading to inefficient cache utilization
- Marking shared variables as non-cacheable is too conservative
- Instructions can be added to enable/disable caching for variables; the compiler can then analyze the code to determine safe periods for caching shared variables

Hardware Solutions
- A.k.a. cache coherence protocols
- Dynamic recognition of potential problems at run time; because a problem is dealt with only when it occurs, the cache is used more efficiently
- Transparent to the programmer and compiler
- Methods: directory protocols and snoopy protocols
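The inconsistency that the write-back policy allows can be shown in a few lines. A minimal simulation (illustrative only; two arrays stand in for two private caches over one shared memory word, with no coherence protocol):

```c
#include <stdio.h>

/* Two private caches over one shared memory word, write-back policy,
   no coherence protocol: P1's write stays in its own cache, so P0
   keeps reading a stale copy. */
int main(void) {
    int memory = 0;        /* shared main memory */
    int cache[2];          /* private cache of P0 and P1 */

    cache[0] = memory;     /* P0 reads: miss, fill from memory */
    cache[1] = memory;     /* P1 reads: miss, fill from memory */

    cache[1] = 42;         /* P1 writes: write-back keeps it in cache only */

    /* P0 reads again: hit on its own, now stale, copy */
    printf("P0 sees %d, P1 sees %d, memory holds %d\n",
           cache[0], cache[1], memory);   /* P0 sees 0, P1 sees 42, memory 0 */
    return 0;
}
```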

Directory Protocols
- Central control: a central memory controller maintains a directory of where blocks are held, in which caches they are held, and what state the data is in; the controller performs the appropriate transfers
- Write process:
  - Requests to write to a line are made to the controller
  - Using the directory, the controller tells all other processors holding a copy of the same data to invalidate it
  - The write is granted to the requesting processor, which then has exclusive rights to that data
  - A read request from another processor forces the controller to command the processor with exclusive rights to update (write back) main memory
- Problems: creates a central bottleneck, with communication overhead
- Effective in large-scale systems with complex interconnection schemes

Snoopy Protocols
- Distributed control: cache coherence responsibility is distributed among the cache controllers
- A cache recognizes that a line is shared, and updates are announced to the other caches
- Suited to bus-based multiprocessors
- Problem: it is possible to increase bus traffic to the point of canceling out the benefits
- Two types of implementations: write invalidate and write update

Write Invalidate (a.k.a. MESI)
- Multiple readers, one writer
- When a write is required, a command is issued and all other cached copies of the line are invalidated
- The writing processor then has exclusive (cheap) access until the line is required by another processor
- A state is associated with every line: Modified, Exclusive, Shared, or Invalid

Write Update (a.k.a. Write Broadcast)
- Multiple readers and writers
- The updated word is distributed to all other processors
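The directory write/read procedure above maps directly onto a small data structure. A minimal sketch (illustrative only: one memory block, four caches, and printf standing in for the controller's invalidate and write-back commands):

```c
#include <stdio.h>

#define NPROC 4

/* Toy directory for one memory block: the controller records which
   caches hold a copy and which (if any) has exclusive write rights. */
struct directory {
    int present[NPROC];   /* cache i holds a copy */
    int owner;            /* processor with exclusive rights, -1 = none */
};

/* Write request: invalidate every other copy, then grant exclusivity. */
void dir_write(struct directory *d, int requester) {
    for (int i = 0; i < NPROC; i++)
        if (i != requester && d->present[i]) {
            printf("controller: invalidate copy in cache %d\n", i);
            d->present[i] = 0;
        }
    d->present[requester] = 1;
    d->owner = requester;
}

/* Read request: if another cache holds the block exclusively, the
   controller forces it to write back to main memory first. */
void dir_read(struct directory *d, int requester) {
    if (d->owner >= 0 && d->owner != requester) {
        printf("controller: cache %d writes back to memory\n", d->owner);
        d->owner = -1;
    }
    d->present[requester] = 1;
}

int main(void) {
    struct directory d = {{1, 1, 0, 0}, -1};
    dir_write(&d, 0);   /* invalidates cache 1, grants P0 exclusivity */
    dir_read(&d, 2);    /* forces P0's write-back, shares with P2 */
    return 0;
}
```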

Snoopy Protocol Implementations
- The performance of the two implementations depends on the number of caches and the pattern of reads and writes
- Some systems use adaptive protocols that employ both methods
- Write invalidate is the most common; it is used in Pentium 4 and PowerPC systems

MESI Protocol
Each cache line has two state bits associated with it, giving four states:
- Modified: the line in this cache has been modified and is valid only in this cache
- Exclusive: the line in this cache is the same as the one in memory (unmodified) and is not present in any other cache
- Shared: the line in this cache is the same as the one in memory (unmodified) and may also be present in another cache
- Invalid: the line in this cache contains bad data
Write-throughs from an L1 cache to an L2 cache make the traffic visible to the MESI protocol.

(Figure: MESI state transition diagram)

Clusters
- Definition: a group of interconnected, whole computers working together as a unified computing resource that can create the illusion of being one machine
- An alternative to symmetric multiprocessing (SMP)
- High performance and high availability for server applications
- Each computer is called a node

Cluster Computer Architecture (Figure 18.11 from Stallings, Computer Organization & Architecture)
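The four states above fit in two bits, and a handful of events drive the transitions. A minimal sketch (illustrative only and greatly simplified; a real controller also snoops every bus transaction and performs the actual write-backs):

```c
#include <stdio.h>

/* The four MESI states: two bits per cache line. */
enum mesi { INVALID, SHARED, EXCLUSIVE, MODIFIED };

/* A few representative transitions for the local processor's accesses. */
enum mesi on_local_read(enum mesi s, int others_have_copy) {
    if (s == INVALID)                      /* read miss */
        return others_have_copy ? SHARED : EXCLUSIVE;
    return s;                              /* hit: state unchanged */
}

enum mesi on_local_write(enum mesi s) {
    /* From SHARED or INVALID, an invalidate command is issued on the
       bus first; in every case the writer ends up owning the line. */
    (void)s;
    return MODIFIED;
}

enum mesi on_snooped_write(enum mesi s) {
    (void)s;
    return INVALID;                        /* another cache took ownership */
}

int main(void) {
    enum mesi line = INVALID;
    line = on_local_read(line, 0);   /* -> EXCLUSIVE */
    line = on_local_write(line);     /* -> MODIFIED, no bus traffic needed */
    line = on_snooped_write(line);   /* -> INVALID */
    printf("final state: %d (INVALID)\n", line);
    return 0;
}
```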

Cluster Benefits
- Absolute scalability: almost limitless in terms of adding independent multiprocessing machines
- Incremental scalability: can start out small and build as the user acquires new machines
- High availability: the loss of one node causes only a small decrement in performance, and software (middleware) handles fault tolerance automatically
- Superior price/performance: by using easily affordable building blocks, a cluster gets better performance at a lower price than a single large computer, and expanding the design doesn't depend on a PCB redesign

Cluster Configurations
- High-speed message link options:
  - A dedicated LAN, with at least one node having a connection to remote clients
  - A LAN shared with other, non-cluster machines
- The simplest way to classify clusters is by whether the computers share disk(s):
  - No shared disk: each machine has a local disk
  - Shared disk in addition to the local disks: should use disk mirroring or RAID

(Figure: standby server with no shared disk)
(Figure: shared disk)

Secondary Server Functional Classification
- Passive standby:
  - The second computer takes over in the event of a failure on the part of the first
  - The first computer sends a "heartbeat"; when the heartbeat stops, the secondary takes over (see the sketch after this section)
  - Data or disks must be shared for the secondary to take over the database as well
- Active standby: the second computer participates in processing
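A toy model of the heartbeat rule above (illustrative only; a real implementation sends heartbeats over the cluster's message link): the secondary counts ticks of silence and declares failover once the count reaches a timeout.

```c
#include <stdio.h>

#define TIMEOUT 3   /* ticks without a heartbeat before failover */

int main(void) {
    /* 1 = heartbeat received this tick, 0 = silence; primary dies at tick 5 */
    int heartbeat[10] = {1, 1, 1, 1, 1, 0, 0, 0, 0, 0};
    int silent = 0, primary_up = 1;

    for (int tick = 0; tick < 10; tick++) {
        silent = heartbeat[tick] ? 0 : silent + 1;
        if (primary_up && silent >= TIMEOUT) {
            printf("tick %d: heartbeat lost, secondary takes over\n", tick);
            primary_up = 0;   /* failover: the secondary is now serving */
        }
    }
    return 0;
}
```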

Active Standby Configurations
- Separate server (no shared disk):
  - High performance and availability
  - Scheduling software is needed to assign client requests to servers to balance the load
  - If a computer fails in the middle of an application, another can take over; to do this, there must be some method of copying data between at least neighboring computers
- Shared nothing: all computers share a common RAID but have partitions entirely to themselves; if one fails, the cluster is reconfigured to reallocate the failed computer's partitions
- Shared disk: all computers have access to all volumes of the same disk; some type of locking facility must be used to ensure that data is accessed by only one computer at a time

Comparison of Clustering Methods (Table 18.2 from Stallings, Computer Organization & Architecture)

Cluster O/S Design Issues: Failure Management
Two types of management: high availability and fault tolerance.
- High availability:
  - Independent processes; if one goes down, anything in progress is lost
  - The application layer must handle the uncertainty of partially executed transactions
  - The process is taken over by the next machine
- Fault tolerance: redundancies, plus mechanisms for handling partially executed transactions
- Failover: switching applications and data from the failed system to an alternative within the cluster
- Failback: restoration of applications and data to the original system after the problem is fixed

Cluster O/S Design Issues: Load Balancing
- Incremental scalability of load with changes in the number of nodes
- Automatically include new computers in scheduling
- Middleware needs to recognize that processes may switch between machines
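One concrete form the shared-disk locking facility can take is an exclusive file lock. A minimal sketch (illustrative only: flock() is advisory and per-machine, so a real cluster would use a distributed lock manager; the path /shared/volume/data.db is hypothetical):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/file.h>
#include <unistd.h>

/* Exclusive lock on a file that lives on the shared disk: a second
   process calling this blocks until the first releases the lock, so
   the data is accessed by one computer at a time. */
int main(void) {
    int fd = open("/shared/volume/data.db", O_RDWR);  /* hypothetical path */
    if (fd < 0) { perror("open"); return 1; }

    flock(fd, LOCK_EX);          /* wait for exclusive access */
    /* ... read/modify/write the shared data safely here ... */
    flock(fd, LOCK_UN);          /* let the other nodes proceed */

    close(fd);
    return 0;
}
```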

Cluster O/S Design Issues: Parallelizing Computation
A single application executes in parallel on a number of machines in the cluster. There are three general approaches to the problem:
- Parallelizing compiler: determines at compile time which parts can be executed in parallel, which are then split off to different computers. Performance depends on the compiler.
- Parallelized application: the application is written to be parallel, using message passing to move data between nodes. Hard to program, and performance depends on the programmer, but it has the potential for the best end result.
- Parametric computing: applies when a problem is the repeated execution of an algorithm on different sets of data (example: a simulation using different scenarios). Depends on tools to organize, manage, and execute the runs.

Cluster Middleware
Software installed on each node to enable cluster operation. It provides high availability through load balancing and failover control, and it creates a unified image for the user:
- Single point of entry: the user logs onto the cluster rather than onto a node
- Single file hierarchy: the user sees a single file structure
- Single control point: a single node acts as the interface to the user
- Single virtual network, visible to all cluster nodes
- Single memory space: programs are allowed to share variables across distributed memory
- Single job management system: the cluster assigns the jobs, not the user
- Single user interface

Cluster Middleware: Enhancement of Availability
- Single I/O space: I/O is accessible by any of the nodes regardless of the I/O device's location
- Single process space: processes are treated as if they all operate on a single machine, so the process identification scheme should be uniform and independent of the host node
- Checkpointing: for recovery from a failure, each process periodically saves its state and intermediate variable values for failback
- Process migration: to allow for load balancing
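The message-passing style of a parallelized application is commonly written with MPI (a standard choice, though the slides do not name a library). A minimal sketch: node 0 ships a work item to node 1 and receives the result back.

```c
#include <mpi.h>
#include <stdio.h>

/* Minimal message-passing pattern: rank 0 sends a chunk of work to
   rank 1, which processes it and sends a result back.  Compile with
   mpicc; run with: mpirun -np 2 ./a.out */
int main(int argc, char **argv) {
    int rank, data;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        data = 21;
        MPI_Send(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);   /* to node 1 */
        MPI_Recv(&data, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                          /* result */
        printf("node 0 received result %d\n", data);          /* prints 42 */
    } else if (rank == 1) {
        MPI_Recv(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        data *= 2;                                            /* the "work" */
        MPI_Send(&data, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}
```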

Cluster vs. SMP
Positive points for both:
- Both provide multiprocessor support to high-demand applications
- Both are available commercially; SMP has been around longer

SMP benefits:
- Easier to manage and configure, since it is a single machine
- Closer to the single-processor systems for which nearly all applications are written; scheduling is the main difference between an SMP and a single-processor system
- Less physical space and lower power consumption
- Well established

Cluster benefits:
- Superior incremental and absolute scalability
- Superior availability through redundancy of all components, not just processors
- Simpler to build from whole computers than an SMP, which is designed at the PCB level
- With time, clusters are likely to dominate