GPUs: The Hype, The Reality, and The Future

Uppsala Programming for Multicore Architectures Research Center
GPUs: The Hype, The Reality, and The Future
David Black-Schaffer, Assistant Professor, Department of Information Technology, Uppsala University

Today
1. The hype
2. What makes a GPU a GPU?
3. Why are GPUs scaling so well?
4. What are the problems?
5. What's the future?

THE HYPE

How Good are GPUs?
[Figure: reported speedups ranging from 100x down to 3x.]

Real World Software
Press release, 10 Nov 2011: NVIDIA today announced that four leading applications have added support for multiple-GPU acceleration, enabling them to cut simulation times from days to hours.
GROMACS: 2-3x overall (implicit solvers 10x, PME simulations 1x)
LAMMPS: 2-8x for double precision (up to 15x for mixed)
QMCPACK: 3x
2x is AWESOME! Most research claims 5-10%.

GPUs for Linear Algebra
5 CPUs = 75% of 1 GPU (StarPU for MAGMA).

GPUs by the Numbers (Peak and TDP)
[Chart: watts, GFLOPS, memory bandwidth, clock frequency (GHz), and transistor count for the Intel 3960X, Nvidia GTX 580, and AMD 6970, normalized to the Intel 3960X. Note: Intel's 32nm process vs. the GPUs' 40nm is about 36% smaller per transistor.]

Efficiency
GPUs are enormously more efficient designs for doing FLOPs. But how close to peak do you really get?
[Chart: energy efficiency (GFLOP/W, double-precision peak) and design efficiency (FLOP/cycle per billion transistors) for the Intel 3960X, GTX 580, and AMD 6970.]
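As a rough sanity check on what these bars mean (a sketch using the TDPs shown on the previous slide and commonly cited double-precision peaks, not numbers taken from the talk), the energy-efficiency metric is simply peak throughput divided by power:

\[
\text{GFLOP/W} = \frac{\text{peak GFLOPS}}{\text{TDP}}, \qquad
\text{GTX 580 (DP)} \approx \frac{198}{244} \approx 0.8, \qquad
\text{AMD 6970 (DP)} \approx \frac{675}{250} \approx 2.7 .
\]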

Efficiency in Perspective
Nvidia says that ARM + a laptop GPU achieves 7.5 GFLOPS/W.
[Chart: energy efficiency (GFLOP/W), peak vs. actual, for the Intel 3960X, GTX 580, AMD 6970, Blue Gene/Q (LINPACK, Green 500 #1), and the DEGIMA GPU/CPU cluster (LINPACK, Green 500 #6).]

Intel's Response
Larrabee: manycore x86-light to compete with Nvidia/AMD in graphics and compute. It didn't work out so well (despite huge fab advantages, graphics is hard), so it was repositioned as an HPC co-processor.
Knights Corner: 1 TF double precision in a huge (expensive) single 22nm chip. At 300W this would beat Nvidia's peak efficiency today (40nm).

Show Me the Money
[Chart: Q3 revenue in billions of USD for AMD, Nvidia, and Intel, broken out into professional graphics and embedded processors, with the note "50% of this is GPU in HPC".]

WHAT MAKES A GPU A GPU?

GPU Characteristics
Architecture: data-parallel processing, hardware thread scheduling, high memory bandwidth, graphics rasterization units, limited caches (with texture filtering hardware).
Programming/Interface: data-parallel kernels, throughput-focused, limited synchronization, limited OS interaction.

GPU Innovations
SMT (for latency hiding): massive numbers of threads; programming SMT is far easier than SIMD.
SIMT (thread groups): amortize scheduling/control/data access; warps, wavefronts, work-groups, gangs.
Memory systems optimized for graphics: special storage formats, texture filtering in the memory system, bank optimizations across threads.
Limited synchronization: improves hardware scalability.
(They didn't invent any of these, but they made them successful.)
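To make the SIMT and memory-system points concrete, here is a minimal OpenCL sketch (illustrative only; the kernel name and the stride parameter are not from the talk). The work-items in a warp/wavefront execute the same load instruction together, and the stride decides whether those loads coalesce into a few wide memory transactions or scatter across many:

// Illustrative SIMT memory-access example.
// stride == 1: neighbouring work-items read neighbouring floats, so the
//              warp's loads coalesce into a few wide transactions.
// stride == 32: the same single instruction scatters the warp's loads
//              across many cache lines / DRAM bursts.
// The caller must allocate src with get_global_size(0) * stride elements.
kernel void sum_strided(global const float *src,
                        global float *dst,
                        const int stride)
{
    int id = get_global_id(0);
    dst[id] = src[id * stride];
}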

WHY ARE GPUS SCALING SO WELL?

Lots of Room to Grow the Hardware
Simple cores: short pipelines, no branch prediction, in-order.
Simple memory system: limited caches*, no coherence, split address space*.
Simple control: limited context switching, limited processes (but they do have to schedule 1000s of threads).
Lack of OS-level issues: memory allocation, preemption.
But they do have a lot: TLBs for VM, I-caches, complex hardware thread schedulers. Trading off simplicity for features and programmability.
[Charts: GFLOPs (SP), bandwidth (GB/s), architecture efficiency (FLOP/cycle), and design efficiency (FLOP/cycle per billion transistors) across G80, GT200, and GF100.]

Nice Programming Model
All GPU programs have: explicit parallelism, hierarchical structure, restricted synchronization, data locality (inherent in graphics, enforced in compute by performance), and latency tolerance.

kernel void calcsin(global float *data) {
    int id = get_global_id(0);
    data[id] = sin(data[id]);
}

[Diagram: synchronization is cheap and allowed within a group, expensive or absent across groups; no synchronization between groups is what makes it easy to scale.]
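A sketch of the hierarchy and the restricted synchronization (the kernel name, the use of a local scratch buffer, and the power-of-two work-group size are illustrative assumptions, not from the talk): work-items may synchronize with a barrier inside their own work-group, but there is no barrier spanning work-groups, which is exactly what lets the hardware schedule groups independently and scale.

// Illustrative work-group-level reduction: each work-group reduces its
// own tile in local (on-chip) memory and writes one partial sum.
// barrier() only synchronizes work-items within a work-group; there is
// no cross-group barrier, so groups can run in any order, on any core.
// Assumes the work-group size is a power of two.
kernel void partial_sum(global const float *in,
                        global float *group_sums,
                        local float *scratch)
{
    int lid  = get_local_id(0);
    int gid  = get_global_id(0);
    int size = get_local_size(0);

    scratch[lid] = in[gid];
    barrier(CLK_LOCAL_MEM_FENCE);          // intra-group sync only

    for (int offset = size / 2; offset > 0; offset /= 2) {
        if (lid < offset)
            scratch[lid] += scratch[lid + offset];
        barrier(CLK_LOCAL_MEM_FENCE);
    }
    if (lid == 0)
        group_sums[get_group_id(0)] = scratch[0];
}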

Why are GPUs Scaling So Well?
Room in the hardware design. Scalable software. They're not burdened with 30 years of cruft and legacy code. Lucky them.

WHERE ARE THE PROBLEMS?

Amdahl's Law
You always have serial code, and GPU single-threaded performance is terrible.
Solution: heterogeneity. A few fat latency-optimized cores, many thin throughput-optimized cores, plus hard-coded accelerators.
Nvidia Project Denver: ARM for latency, GPU for throughput. AMD Fusion: x86 for latency, GPU for throughput. Intel MIC: x86 for latency, x86-light for throughput.
[Chart: maximum speedup vs. number of cores (1 to 1024), showing the limits of Amdahl's Law: 99% parallel tops out at 91x, 90% parallel at 10x, 75% parallel at 4x.]
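The limits quoted in the chart follow directly from Amdahl's law. With parallel fraction p on n cores:

\[
S(n) = \frac{1}{(1-p) + p/n}, \qquad S(\infty) = \frac{1}{1-p},
\]

so at n = 1024 cores, p = 0.99 gives S \approx 91, p = 0.90 gives S \approx 10, and p = 0.75 gives S \approx 4, matching the slide.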

It's an Accelerator
Moving data to the GPU is slow. Moving data from the GPU is slow. Moving data to/from the GPU is really slow. Limited data storage. Limited interaction with the OS.
[Diagram: Nvidia GTX 580 (529 mm²) with 1.5 GB of GRAM at 192 GB/s, connected over PCIe at 5 GB/s (PCIe 3.0: 10 GB/s) to an Intel 3960X (435 mm²) with 16 GB of DRAM at 51 GB/s.]
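A back-of-the-envelope estimate using the numbers from the diagram shows why the transfer dominates: filling the 1.5 GB of GRAM over the PCIe link versus reading it from the GPU's own memory gives

\[
t_{\text{PCIe}} \approx \frac{1.5\ \text{GB}}{5\ \text{GB/s}} = 0.3\ \text{s}, \qquad
t_{\text{GRAM}} \approx \frac{1.5\ \text{GB}}{192\ \text{GB/s}} \approx 8\ \text{ms},
\]

so the GPU can stream the same data from its own memory roughly 40x faster than it can receive it from the host.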

Legacy Code
Code lives forever. Amdahl: even optimizing the 90% of hot code limits speedup to 10x. Many won't invest in proprietary technology.
Programming models are immature:
CUDA: mature, low-level (Nvidia, PGI)
OpenCL: immature, low-level (Nvidia, AMD, Intel, ARM, Altera, Apple)
OpenHMPP: mature, high-level (CAPS)
OpenACC: immature, high-level (CAPS, Nvidia, Cray, PGI)

Code that Doesn't Look Like Graphics
If it's not painfully data-parallel, you have to redesign your algorithm. Example: scan-based techniques for zero counting in JPEG. Why? It's only 64 entries! But single-threaded performance is terrible, so you need to parallelize, and the overhead of transferring the data to the CPU is too high.
If it's not accessing memory well, you have to re-order your algorithm. Example: DCT in JPEG. You need to make sure your access to local memory has no bank conflicts across threads (see the sketch below). Libraries are starting to help.
Lack of composability. Example: combining linear algebra operations to keep data on the device.
Most code is not purely data-parallel: it is very expensive to synchronize with the CPU (data transfer), there is no effective support for task-based parallelism, and no ability to launch kernels from within kernels.
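A minimal sketch of the bank-conflict point (illustrative, not the talk's code; it assumes 32 local-memory banks of 32-bit words, which is typical of GPUs of this era but is an assumption here): when work-items in a group read a column of a tile stored row-major in local memory, they all hit the same bank unless each row is padded by one element.

#define TILE 32
// Illustrative transpose-style access pattern in local memory.
// Without the +1 padding, the column read below maps every work-item in
// a warp/wavefront to the same bank, serializing the loads; the extra
// column spreads consecutive rows across different banks.
// Assumes a TILE x TILE work-group.
kernel void column_read(global const float *in, global float *out)
{
    local float tile[TILE][TILE + 1];   // +1 column avoids bank conflicts

    int lx = get_local_id(0);
    int ly = get_local_id(1);
    int gx = get_global_id(0);
    int gy = get_global_id(1);
    int width = get_global_size(0);

    tile[ly][lx] = in[gy * width + gx];  // row-wise store
    barrier(CLK_LOCAL_MEM_FENCE);

    // Column-wise read: conflict-free thanks to the padding.
    out[gy * width + gx] = tile[lx][ly];
}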

Corollary: What are GPUs Good For?
Data-parallel code: lots of threads, easy to express parallelism (no SIMD nastiness).
High arithmetic intensity and simple control flow: lots of FPUs, no branch predictors.
High reuse of limited data sets: very high bandwidth to memory (1-6 GB), extremely high bandwidth to local memory (16-64 kB).
Code with predictable access patterns: small (or no) caches, user-controlled local memories.
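One way to quantify "high arithmetic intensity" (a rough estimate using the GTX 580's commonly cited single-precision peak of about 1581 GFLOPS and the 192 GB/s bandwidth from the earlier diagram; these figures are assumptions, not numbers from this slide): to keep the FPUs busy rather than the memory system, a kernel needs roughly

\[
\frac{1581\ \text{GFLOPS}}{192\ \text{GB/s}} \approx 8\ \text{FLOPs per byte} \approx 33\ \text{FLOPs per float}
\]

fetched from GRAM. Anything far below that is bandwidth-bound, which is why data reuse in local memory matters so much.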

WHAT'S THE FUTURE OF GPUS?

Heterogeneity for Efficiency
No way around it in sight: specialize to get better efficiency. This is why GPUs are more efficient today.
Heterogeneous mixes: throughput-oriented thin cores, latency-focused fat cores, and fixed-function accelerators (video, audio, network, etc.), already in OpenCL 1.2.
Dark silicon: the OS/runtime/app will have to adapt, and energy will be a shared resource. Examples: ARM big.LITTLE (Cortex-A15 paired with Cortex-A7), Nvidia Tegra 3 (4 fast cores + 1 slow core).

The Future is Not in Accelerators
Memory: unified memory address space, low-performance coherency, high-performance scratchpads. OS interaction between all cores.
Nvidia Project Denver: ARM for latency, GPU for throughput. AMD Fusion: x86 for latency, GPU for throughput. Intel MIC: x86 for latency, x86-light for throughput.
[Diagram: AMD Fusion, with GPU cores and CPU cores sharing I/O on one die.]

... Focus on Data Locality
[Figure 1 from S. Keckler et al., "GPUs and the Future of Parallel Computing," IEEE Micro, Sept/Oct 2011: GPU processor and memory-system performance trends, 2000-2012, showing single-precision performance, double-precision performance (which requires 2x the bandwidth and storage), and memory bandwidth. Memory bandwidth initially nearly doubled every two years, but this trend has slowed over the past few years, while on-die GPU performance has continued to grow at about 45 percent per year.]

Focus on Data Locality
Not just on-chip/off-chip, but within a chip.
Software-controllable memories: configure for cache/scratchpad, enable/disable coherency, programmable DMA/prefetch engines.
The program must expose data movement/locality: explicit information to the runtime/compiler; auto-tuning, data-flow, optimization. But we will have global coherency to get code correct.
(See the Micro paper "GPUs and the Future of Parallel Computing" from Nvidia about their Echelon project and design.)
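Today's closest analogue to a software-controlled memory is the user-managed local memory GPUs already have. A minimal, hypothetical sketch (the kernel name and filter are illustrative, not from the talk) of explicitly staging a tile into the scratchpad, reusing it, and writing results back; the data movement is visible in the program rather than inferred by a cache:

#define TILE 256
// Illustrative explicit staging: copy a tile into on-chip local memory,
// reuse the neighbouring elements from there, then write back to global
// memory. The programmer, not a cache, decides what lives on-chip.
// Assumes a work-group size of TILE.
kernel void smooth_tile(global const float *in, global float *out)
{
    local float tile[TILE];

    int lid = get_local_id(0);
    int gid = get_global_id(0);

    tile[lid] = in[gid];                    // explicit global -> local copy
    barrier(CLK_LOCAL_MEM_FENCE);

    // Reuse neighbours from the scratchpad instead of re-reading GRAM.
    float left  = (lid > 0)        ? tile[lid - 1] : tile[lid];
    float right = (lid < TILE - 1) ? tile[lid + 1] : tile[lid];
    out[gid] = 0.25f * left + 0.5f * tile[lid] + 0.25f * right;
}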

Graphics will (still) be a Priority
It's where the money is. Fixed-function graphics units, memory-system hardware for texture interpolation, and half-precision floating point will live forever. (And others might actually use it. Hint hint.)

CONCLUSIONS

Breaking Through The Hype
There is a real efficiency advantage, though Intel is pushing hard to minimize it, and it is much larger for single precision.
There is a real performance advantage, about 2-5x, but you have to re-write your code.
The market for GPUs is tiny compared to CPUs.
Everyone believes that specialization is necessary to tackle energy efficiency.

The Good, The Bad, and The Future
The Good: a limited domain allows a more efficient implementation, and a good choice of domain allows good scaling.
The Bad: the limited domain focus makes some algorithms hard, and they are not x86/Linux (legacy code).
The Future: throughput cores + latency cores + fixed accelerators. Code that runs well on GPUs today will port well. We may share hardware with the graphics subsystem, but we won't program GPUs.
Uppsala Programming for Multicore Architectures Research Center