Parallelism and runtimes
1 Parallelism and runtimes
Advanced Course on Compilers, Spring 2015 (III-V): Lecture 7
Vesa Hirvisalo, ESG/CSE/Aalto
2 Today
- Parallel platforms: concurrency, consistency
- Examples of parallelism: regularity of data accesses, regularity of control flow
- Approaches to parallelism: how we map control, data, and HW
- Runtimes and compiling: coordination and scalability, inter-procedural aspects
3 Parallel platforms
4 Some basics
- Basics (as for any concurrent execution): memory, synchronization, scheduling
- What is so hard here?
  - software: threads, communication, loops, etc.
  - hardware: threads, etc.
  - not just parallel execution, but statically several threads: core-memory and core-core dependences, races and concurrency control (see the sketch after this slide), communication (typically buses + memory)
- These things do not scale well: no silver bullet (all approaches have limitations)
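To make the race problem concrete, here is a minimal sketch (my example, not from the slides) in C with POSIX threads: two threads increment a shared counter without synchronization, so their read-modify-write sequences interleave.

    /* Data race: the final count is usually less than 2,000,000. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;              /* shared, unprotected */

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            counter++;                    /* racy read-modify-write */
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld (expected 2000000)\n", counter);
        return 0;
    }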
5 Coordinated processing
- Coherence and store atomicity: strict coherence, store atomicity, plain coherence
- Consistency: sequential consistency; violations (e.g., speculative execution); relaxations (with and without forwarding); processor consistency, weak consistency
- Synchronization: hardware implementation, software implementation
6 Performance
- The motivation is performance, which is often linked to scalability: strong vs. weak scalability, horizontal vs. vertical scalability
- Parallelism costs: a lot of overhead work may be needed to make the parallelism happen
- The overhead typically includes: managing the multiple threads, communication among threads/processes, synchronization
- Amdahl's law: remember the serial part (a worked example follows this slide)
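As a quick worked illustration of Amdahl's law (the numbers are mine, not from the slides): if a fraction p of the work parallelizes over N cores, the speedup is

    speedup(N) = 1 / ((1 - p) + p / N)

With p = 0.95 and N = 32 this gives 1 / (0.05 + 0.95/32) ≈ 12.5, and even as N grows without bound the speedup is capped at 1 / 0.05 = 20: the 5% serial part dominates.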
7 Speculation
- Often, speculation is needed: execution of something before knowing whether it is needed; with threads, SpMT (speculative multithreading)
- Granularity affects a lot: coarse-grained comes with less overhead, but flexibility is needed
- Memory issues: coherence, consistency
- Data and control: exchange of data and control, dependences
8 Consistency and coherence
- We must understand how memory works: the real memory is complex, so instead we use an abstraction
- Consistency models are for the programmer
- The basic problem is understanding memory access semantics: a location is written (several times) and then read; what value do we get (which write do we read)?
- Strict consistency: any read of a memory location x returns the value stored by the most recent write operation to x
- This is often very impractical: real races? optimizations!
9 Consistency models
- Sequential consistency: the result of any execution is the same as if the reads and writes occurred in some (single) order, and the operations of each individual processor appear in this sequence in the order specified by its program
- Cache coherence: distinction between local and global; a locally (sequentially) consistent view
- Processor consistency: preserve order per processor (Pipelined Random Access Memory, PRAM)
- Weak consistency: division between synchronizing and non-synchronizing accesses; synchronizing accesses are sequentially consistent
(A litmus-test sketch follows this slide.)
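The classic store-buffering litmus test makes the difference visible. A minimal sketch (my example, assuming POSIX threads): under sequential consistency at least one thread must observe the other's store, so r1 == 0 && r2 == 0 is impossible; with hardware store buffers or compiler reordering, both loads can return 0.

    #include <pthread.h>
    #include <stdio.h>

    volatile int x = 0, y = 0;  /* volatile stops compiler caching only;
                                   it does NOT restore sequential consistency */
    int r1, r2;

    static void *t1(void *a) { (void)a; x = 1; r1 = y; return NULL; }
    static void *t2(void *a) { (void)a; y = 1; r2 = x; return NULL; }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, t1, NULL);
        pthread_create(&b, NULL, t2, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("r1 = %d, r2 = %d\n", r1, r2);  /* 0,0 betrays reordering */
        return 0;
    }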
10 Examples of parallelism
11 Regular code
Consider

    for (i = 0; i < n; i++)
        C[i] = x * A[i] + B[2*i];

The code has
- regular data access: the strides are linear; having a different step size does not matter
- regular control flow: no conditionals
12 Irregular data accesses
Consider

    for (i = 0; i < n; i++)
        E[C[i]] = D[A[i]] + B[i];

The code has
- irregular data access: the accesses depend on other data (basically indirect accesses), and that data is not known statically
- regular control flow: no conditionals
13 Irregular control flow
Consider

    for (i = 0; i < n; i++) {
        x = (A[i] > 0) ? y : z;
        C[i] = x * A[i] + B[i];
    }

The code has
- regular data access
- irregular control flow: there is a conditional
14 Simple irregular code
Consider

    for (i = 0; i < n; i++)
        if (A[i] > 0)
            C[i] = x * A[i] + B[i];

The code has
- irregular control flow: there is a conditional
- irregular data access: the conditional affects the striding; the strides are punctuated
15 Code with complex irregularities
Consider

    for (i = 0; i < n; i++) {
        C[i] = false;
        j = 0;
        while (!C[i] && (j < m))
            if (A[i] == B[j++])
                C[i] = true;
    }

The code has
- irregular control flow
- irregular data access
- and the two are interdependent
16 MIMD code
For the simple irregular code, distributed over nthr threads (tidx is the thread index):

    div    m, n, nthr
    mul    t, m, tidx
    add    a_ptr, t
    add    b_ptr, t
    add    c_ptr, t
    sub    t, nthr, 1
    br.neq t, tidx, ex
    rem    m, n, nthr
    ex:
    load   x, x_ptr
    loop:
    load   a, a_ptr
    br.eq  a, 0, done
    load   b, b_ptr
    mul    t, x, a
    add    c, t, b
    store  c, c_ptr
    done:
    add    a_ptr, 1
    add    b_ptr, 1
    add    c_ptr, 1
    sub    m, 1
    br.neq m, 0, loop
17 Vector-SIMD code
For the simple irregular code; note the vector instructions and the flag register VF (a C intrinsics sketch of the same idea follows this slide):

    load     x, x_ptr
    loop:
    setvl    vlen, n
    load.v   VA, a_ptr
    load.v   VB, b_ptr
    cmp.gt.v VF, VA, 0
    mul.sv   VT, x, VA, VF
    add.vv   VC, VT, VB, VF
    store.v  VC, c_ptr, VF
    add      a_ptr, vlen
    add      b_ptr, vlen
    add      c_ptr, vlen
    sub      n, vlen
    br.neq   n, 0, loop
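The same predication pattern can be written in C with x86 AVX intrinsics (a sketch under my own assumptions: AVX hardware, 32-bit floats; the masked store plays the role of the VF flag register):

    #include <immintrin.h>

    void cond_saxpy(int n, float x, const float *A, const float *B, float *C) {
        __m256 vx = _mm256_set1_ps(x);
        __m256 zero = _mm256_setzero_ps();
        int i = 0;
        for (; i + 8 <= n; i += 8) {
            __m256 va = _mm256_loadu_ps(&A[i]);
            __m256 vb = _mm256_loadu_ps(&B[i]);
            __m256 vf = _mm256_cmp_ps(va, zero, _CMP_GT_OQ);   /* the VF mask */
            __m256 vc = _mm256_add_ps(_mm256_mul_ps(vx, va), vb);
            _mm256_maskstore_ps(&C[i], _mm256_castps_si256(vf), vc);
        }
        for (; i < n; i++)                                     /* scalar tail */
            if (A[i] > 0.0f)
                C[i] = x * A[i] + B[i];
    }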
18 SIMT code
For the simple irregular code; note that there is no loop (each thread tidx handles one element):

    br.gte tidx, n, done
    add    a_ptr, tidx
    load   a, a_ptr
    br.eq  a, 0, done
    add    b_ptr, tidx
    add    c_ptr, tidx
    load   x, x_ptr
    load   b, b_ptr
    mul    t, x, a
    add    c, t, b
    store  c, c_ptr
    done:
19 Approaches to compilation
20 Levels and forms of parallelism
- Several levels of parallelism: instruction-level parallelism (often the same as pipelining), thread-level parallelism
- Threads can differ: traditional threads; microthreads, which are close to vectorization
- Synchronization: between threads, between tasks
21 Multiple threads
- Multithreading: multiple threads, shared memory
- Multiprocessing: multiple processes, distributed memory
- In practice, sadly, these terms are often used interchangeably!
  - compiler technology: mostly the former
  - note that other models exist; we will review the latter shortly
- In any case, we must synchronize!
22 Structuring the threads
- Often, the fork/join model is used (see the sketch after this slide): each program begins with a single thread; new threads are forked when a parallel region is reached; threads are joined when the parallel region ends
- Note: in principle the child threads are terminated; in practice they often continue. Why? The overhead costs of thread creation and termination.
- Note the scheduling issues: gang scheduling
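A minimal fork/join sketch with POSIX threads (my example, not from the slides): the master forks workers at the start of the parallel region and joins them at its end.

    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 4

    /* Each worker executes one share of the parallel region. */
    static void *region_body(void *arg) {
        long tidx = (long)arg;
        printf("thread %ld working on its chunk\n", tidx);
        return NULL;
    }

    int main(void) {
        pthread_t t[NTHREADS];
        for (long i = 0; i < NTHREADS; i++)   /* fork: enter the region */
            pthread_create(&t[i], NULL, region_body, (void *)i);
        for (int i = 0; i < NTHREADS; i++)    /* join: leave the region */
            pthread_join(t[i], NULL);
        puts("parallel region done");         /* only the master continues */
        return 0;
    }

A pooled runtime keeps the workers alive between regions, which is exactly the "in practice, the child threads often continue" point above.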
23 Synchronization
- There are several synchronization mechanisms: memory sharing, channel sharing (a mutex sketch follows this slide)
- Essential: who waits and how; different semantics and implementations; hardware involvement; scheduling and memory
- Connection to dependences: they are the cause; static vs. dynamic
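For the memory-sharing case, a mutex is the classic mechanism. As a sketch (mine, using POSIX), this worker is a drop-in repair for the racy counter of slide 4:

    #include <pthread.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    /* The lock serializes the read-modify-write: the race is gone,
     * at the cost of synchronization overhead on every iteration. */
    static void *worker(void *arg) {
        (void)arg;
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }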
24 Compilation and mappings
- In parallel processing, we typically have: control (i.e., the code running), data (i.e., the accesses made by the code), multiple processing units, memory
- Mapping: our problem is often finding mappings between these; usually there are restrictions
- Approaches:
  - occupancy-based compilation: not allowing the control to diverge
  - dependency-based compilation: not allowing the data to diverge
25 Runtimes and compiling
26 Traditional coordination and scalability
- Coordination: communication, synchronization
- Traditionally, coherency and consistency are present
  - e.g., micro-architecture-level speculation: we have the semantics at the ISA level and can (locally!) check against that
  - e.g., SMP systems: we can interrupt a thread; threads are "co-operating", not "co-performing"; the memory is coherent and consistent (w.r.t. the task)
  - MOESI (etc.) in multicores
- Partial coherency and consistency => a lot of trouble: manycores
27 Performance-oriented coordination
- Latency hiding is the key here: we coordinate in a way that hides the latency, so effective computation goes on despite the waits
- Examples:
  - speculation: in pipelining we start an operation before its data is there; in transactional memory (TM) we roll back if we fail
  - cache: a parallel structure (we check the tags in parallel); prefetching and speculating keep data close to the core (see the sketch after this slide)
  - I/O wait: we execute some other thread while waiting for a device
  - memory wait: the core is fed some other thread
- Note that coherency and consistency are the key here; they are also the reason why TM is very hard for manycores
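Software prefetching is the simplest latency-hiding device a compiler (or programmer) can insert. A sketch using the GCC/Clang builtin; the prefetch distance of 16 elements is an assumption to be tuned, not a rule:

    /* Request A[i+16] while computing on A[i], overlapping the
     * memory latency with useful work. __builtin_prefetch is a
     * GCC/Clang extension and is only a hint to the hardware. */
    void scale(int n, float x, const float *A, float *C) {
        for (int i = 0; i < n; i++) {
            if (i + 16 < n)
                __builtin_prefetch(&A[i + 16]);
            C[i] = x * A[i];
        }
    }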
28 Platform support
- The classical example: POSIX, with a compiler runtime on top of that (threads)
- Coordination: synchronization, communication
- Resource management: memory management, resource registration and partitioning
- Task management: task life cycle (there can be a lot!); task placement, priorities, and dependencies; scheduling (toward HW-assisted scheduling)
29 Compiler support
- Programs spend their time in repeatedly executed parts: only some loop structures are suitable for modern architectures, so loops need to be restructured
- Loop restructuring is hard: a lot of complex tools (and theory) are needed (see the unswitching sketch after this slide)
- Many transformations are based on separating iterations, schedules, and syntactic representations: statements can be represented independently of their location and control
- Typically supported by an optimization framework: dependences are essential
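One classical restructuring is loop unswitching (my illustration, not from this slide): a conditional that does not depend on the loop iteration is hoisted out of the loop, leaving two branch-free, regular loops that are easier to vectorize.

    /* Before: the branch on the loop-invariant 'flag' runs every iteration. */
    void before(int n, int flag, const float *A, float *C) {
        for (int i = 0; i < n; i++)
            C[i] = flag ? 2.0f * A[i] : A[i];
    }

    /* After unswitching: each loop body is regular. */
    void after(int n, int flag, const float *A, float *C) {
        if (flag)
            for (int i = 0; i < n; i++)
                C[i] = 2.0f * A[i];
        else
            for (int i = 0; i < n; i++)
                C[i] = A[i];
    }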
30 Static analysis of calls
Modern software consists of multiple compilation units: a lot of small fragments (procedures, subroutines, functions, methods, operators, ...), many of them dynamic, with frequent calls/returns to dynamic targets. This makes control-flow analysis hard. We should statically understand the calling structure, but as our analysis is static and the calls are dynamic, there are limits to this. However, if the code itself is static (e.g., no new subroutines are loaded and linked), the problem reduces to the structure of the code (see the example below).
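A small C example (mine, not from the slides) of a dynamic call target: the callee at the indirect call site depends on runtime data, so a static call graph must conservatively include every function whose address may flow to cb.

    /* The call through 'cb' has no single static target; the analysis
     * must approximate the set of functions 'cb' may point to. */
    typedef int (*op_fn)(int, int);

    static int add(int a, int b) { return a + b; }
    static int mul(int a, int b) { return a * b; }

    int apply(int use_mul, int a, int b) {
        op_fn cb = use_mul ? mul : add;   /* target chosen at runtime */
        return cb(a, b);                  /* indirect call site */
    }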
31 Caller-callee interaction
Programs handle dynamic objects, so we have analyses like pointer analysis, shape analysis, etc. Without them, efficient code is lost: what if on every line there is a call to something holding a pointer to anything? If we have no understanding of the semantics, we cannot do any optimization. In addition to understanding the call-return flow, it is important to understand what happens during the callee's activations and to use this information in the analysis of the caller (see the sketch below).
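A sketch of the problem (my example): knowing nothing about mystery(), the compiler must assume the call may write *p, so the value cannot be kept in a register across it. A callee summary proving that mystery() writes no memory reachable from p would let the second load be removed.

    void mystery(void);   /* opaque: may read or write anything reachable */

    int sum_twice(int *p) {
        int a = *p;       /* first load of *p                  */
        mystery();        /* without a summary: may clobber *p */
        int b = *p;       /* must reload; cannot reuse 'a'     */
        return a + b;
    }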
32 Approaching interprocedural analysis
- We have two basic issues:
  - understanding the flow: adding call-return to our understanding of control flow
  - understanding the effects: how the caller affects the callee, how the callee affects the caller; flow and context sensitivity
- The control-flow super graph (CFSG) is one way to solve this, but region-based analysis yields faster solutions with context information included:
  - call graphs and call strings
  - lattices of transfer functions instead of values
  - closure of meet; repetition is needed
  - summary analysis (forget the flow)