Scale-Out Acceleration for Machine Learning


1 Scale-Out Acceleration for Machine Learning. Jongse Park, Hardik Sharma, Divya Mahajan, Joon Kyung Kim, Preston Olds, Hadi Esmaeilzadeh. Alternative Computing Technologies (ACT) Lab, Georgia Institute of Technology / University of California, San Diego. MICRO 50.

2 Data grows at an unprecedented rate. In one minute over the internet: 900,000 logins; 452,000 tweets sent; 40,000 hours listened; 156 million emails sent; 1.8 million snaps created; 46,200 posts uploaded; $751,522 spent online; 70,107 hours watched; 3.5 million search queries; 342,000 apps downloaded.

3 Machines learn to extract insights from data. Training phase: training data feeds an ML algorithm to produce a trained model. Inference phase: the trained model makes predictions on unseen data.

5 Two disjoint solutions for ML training: distributed computing on one side, and single-node FPGA/ASIC accelerators on the other (SBCCI'03, FCCM'09, FCCM'10, AHS'11, CVPRW'11, DAC'12, ASPLOS'14, ASPLOS'15, HPCA'15, HPCA'16, MICRO'16).

6 Two disjoint solutions for ML training: CoSMIC bridges distributed computing and single-node FPGA/ASIC accelerators (SBCCI'03, FCCM'09, FCCM'10, AHS'11, CVPRW'11, DAC'12, ASPLOS'14, ASPLOS'15, HPCA'15, HPCA'16, MICRO'16).

7 Challenges: (1) How to distribute ML training? (2) How to design customizable accelerators? (3) How to reduce the overhead of distributed coordination?

8 Challenges: (1) How to distribute ML training? (2) How to design customizable accelerators? (3) How to reduce the overhead of distributed coordination? We need a full stack.

9 CoSMIC: Computing Stack for ML Acceleration In the Cloud. Programming layer: high-level mathematical language. Compilation layer: accelerator operation scheduling and data mapping. System layer: system software orchestrating distributed accelerators. Architecture layer: multi-threaded template architecture. Circuit layer: constructing RTL Verilog.

10 CoSMIC workflow. The Translator turns the model specification into a dataflow graph; the Compiler generates the accelerator's operation schedule and data map; the Planner produces the thread index table and memory schedule; the Constructor emits RTL Verilog for the accelerator from the template and the circuit constraints; and the System Director generates the system software from the system specification. The target is a multi-threaded template architecture: an array of PEs organized into worker threads (#1 through #T), fed through a memory interface, shifter, and prefetch buffer, and connected by a pipelined memory bus, shared row buses, and a tree bus.

11 Challenges: (1) How to distribute ML training? (2) How to design customizable accelerators? (3) How to reduce the overhead of distributed coordination? We need a full stack.

12 (1) How to distribute? Understanding machine learning. Training data pairs inputs X with expected outputs Y* (e.g., input 25,1,76,0 with output 1; input 6,43,9,93 with output 6). An intermediate model $w$ maps an input to a predicted output $Y$, and the loss measures how far the prediction falls from the expected output, e.g. $\mathrm{Loss}(w) = \sum (Y^{*} - Y)^2$.

13 Algorithmic commonalities. Learning is solving an iterative optimization problem: over the training data, find the model parameters $w$ for which $\mathrm{Loss}(w) = \sum (Y^{*} - Y)^2$ is minimized; the minimizer is the trained model.

14 Stochastic gradient descent solver. The loss is a function of the model parameters, $\ell(w^{(t)}) = f(w_1^{(t)}, w_2^{(t)}, \ldots, w_n^{(t)})$, and each iteration steps from the current point toward the optimal point against the gradient: $w_i^{(t+1)} = w_i^{(t)} - \mu \, \partial f(w_1^{(t)}, \ldots, w_n^{(t)}) / \partial w_i$. (Figure: descent across the loss surface over $(w_1, w_2)$ from the initial point to the optimal point.)
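
A minimal sketch of that update rule in Python, assuming a generic differentiable objective (the names `grad_f` and `mu` are illustrative, not CoSMIC's API):

```python
import numpy as np

def sgd(grad_f, w0, mu=0.01, iters=1000):
    """Iterate w^(t+1) = w^(t) - mu * grad(f)(w^(t)) from an initial point."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        w = w - mu * grad_f(w)  # step against the gradient of the loss
    return w

# Example: f(w) = ||w - c||^2 has gradient 2*(w - c), so SGD converges to c.
c = np.array([3.0, -1.0])
w_star = sgd(lambda w: 2.0 * (w - c), w0=[0.0, 0.0])
```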

15 Leverage linearity of differentiation for distributed learning. The training data is split into M partitions. Each delta node $i$ runs its accelerator over data partition $i$ to produce a partial model $W_i^{(t+1)} = f(W^{(t)})$, and the sigma node combines the partials: $W^{(t+1)} = \frac{1}{M} \sum_i W_i^{(t+1)}$.
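
A sketch of that division of labor, assuming per-example gradients and in-memory partitions (the helper names are hypothetical):

```python
import numpy as np

def delta_node(w_t, partition, grad, mu=0.01):
    """A delta node advances the shared model W^(t) over its own data
    partition, producing a partial model W_i^(t+1) = f(W^(t))."""
    w = np.array(w_t, dtype=float)
    for x, y in partition:
        w -= mu * grad(w, x, y)  # local SGD steps on this partition only
    return w

def sigma_node(partial_models):
    """The sigma node averages the M partial models:
    W^(t+1) = (1/M) * sum_i W_i^(t+1)."""
    return sum(partial_models) / len(partial_models)
```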

16 Abstraction between algorithm and scale-out acceleration system. Machine learning algorithms (support vector machines, regression analysis, back propagation, collaborative filtering, Kalman filters) each expose the gradient of their loss function. The parallelized stochastic gradient descent solver is the abstraction between hardware and software, so the same stack can target Xeon Phi, GPU, FPGA, CGRA, or ASIC.
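
One way to picture the abstraction: each algorithm contributes only the gradient of its loss function, and the solver below the line is shared (a sketch; the class and function names are assumptions, not CoSMIC's API):

```python
class Learner:
    """Algorithm-side contract: supply the gradient of the loss function."""
    def gradient(self, w, x, y):
        raise NotImplementedError

class LinearRegression(Learner):
    def gradient(self, w, x, y):
        # d/dw_i of 0.5*(w.x - y)^2  =  (w.x - y) * x_i
        return (w @ x - y) * x

def solver_step(learner, w, batch, mu=0.01):
    """Hardware-side contract: one parallelizable SGD pass, identical
    regardless of which Learner (SVM, regression, backprop, ...) it runs."""
    for x, y in batch:
        w = w - mu * learner.gradient(w, x, y)
    return w
```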

17 CoSMIC programming. The math formulation of the linear-regression gradient, $\partial \mathrm{loss}(w)/\partial w_i = \left(\sum_i w_i x_i - y\right) x_i$, becomes the CoSMIC program h = sum[i](w[i] * x[i]); d = h - y; g[i] = d * x[i]; and the aggregation rule $w^{(t+1)} = \frac{1}{n} \sum_j w_j^{(t+1)}$ becomes aggregator(n) { w[i] = (sum[j](w[i])) / n; }.
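
A plain-Python reading of those DSL statements, for reference (illustrative only; `ws` stands for the n per-worker models):

```python
import numpy as np

def gradient(w, x, y):
    h = np.sum(w * x)  # h = sum[i](w[i] * x[i])
    d = h - y          # d = h - y
    return d * x       # g[i] = d * x[i]

def aggregator(ws):
    # w[i] = (sum[j](w[i])) / n: average element i over the n workers j
    return np.sum(ws, axis=0) / len(ws)
```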

18 CoSMIC compilation. The compiler lowers the CoSMIC program (h = sum[i](w[i] * x[i]); d = h - y; g[i] = d * x[i]) into a dataflow graph for the accelerator, and lowers the aggregator(n) { w[i] = (sum[j](w[i])) / n; } declaration into the software aggregator.
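
To make the lowering concrete, here is a sketch of how a compiler might expand h = sum[i](w[i] * x[i]) into dataflow-graph nodes: leaf multiplies feeding a pairwise reduction tree (the node encoding is hypothetical, not CoSMIC's internal representation):

```python
n = 4       # assumes n is a power of two for a clean tree
nodes = {}  # node name -> (operation, input names)
for i in range(n):
    nodes[f"m{i}"] = ("*", [f"w[{i}]", f"x[{i}]"])  # leaf multiplies
level, k = [f"m{i}" for i in range(n)], 0
while len(level) > 1:  # build the adder tree, level by level
    nxt = []
    for a, b in zip(level[0::2], level[1::2]):
        nodes[f"a{k}"] = ("+", [a, b])
        nxt.append(f"a{k}")
        k += 1
    level = nxt
nodes["h"] = nodes.pop(level[0])  # the tree's root is h
```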

19 Challenges: (1) How to distribute ML training? (2) How to design customizable accelerators? (3) How to reduce the overhead of distributed coordination? We need a full stack.

20 (2) How to design customizable accelerators? Template architecture: the chip is an M-row by N-column grid of processing engines, PE 1,1 through PE M,N, sitting behind a memory interface.

21 Multi-threading acceleration. The PE grid is partitioned among worker threads: worker thread #1 through worker thread #T each own a contiguous group of PE rows behind the shared memory interface.

22 Connectivity and bussing. A pipelined memory bus carries data from the memory interface across the array, a shared row bus links the PEs within each row, a tree bus spans the rows, and bi-directional links connect neighboring PEs. Refer to the paper for details.

23 PE architecture. Each PE is a five-stage pipeline (stage 1 through stage 5). Operands arrive from the shared bus, the left PE, the right PE, or the PE's local data, model, and interim buffers, and feed an ALU (add, multiply, compare) plus a non-linear unit.
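
A behavioral sketch of the datapath (a functional stand-in for the RTL; the operation set mirrors the slide's ALU plus the non-linear unit):

```python
import math

def pe_op(op, a, b=0.0):
    """One PE operation: operands come from the shared bus, a neighboring
    PE, or the local data/model/interim buffers; the result is written
    back to a buffer or forwarded on a bus."""
    if op == "+":
        return a + b
    if op == "*":
        return a * b
    if op == ">":
        return 1.0 if a > b else 0.0  # compare, e.g. for max/thresholding
    if op == "sigmoid":               # non-linear unit
        return 1.0 / (1.0 + math.exp(-a))
    raise ValueError(f"unsupported op: {op}")
```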

24 Challenges: (1) How to distribute ML training? (2) How to design customizable accelerators? (3) How to reduce the overhead of distributed coordination? We need a full stack.

25 (3) How to reduce the overhead of distributed coordination? Specialized system software in CoSMIC. The sigma node software runs internally managed thread pools for networking and aggregation: network handlers fill circular buffers that the aggregator drains, so partial models from the delta node software are exchanged block by block with producer-consumer semantics.
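
A sketch of the block-by-block handoff using Python's bounded queue as the circular buffer (CoSMIC's implementation is custom; `recv_block` is a hypothetical receive helper):

```python
import queue

blocks = queue.Queue(maxsize=8)  # bounded buffer standing in for the circular buffers

def networking_thread(recv_block, n_blocks):
    """Networking pool: deposit model blocks as they arrive off the wire,
    without waiting for a delta node's whole model."""
    for _ in range(n_blocks):
        idx, data = recv_block()
        blocks.put((idx, data))   # blocks when the buffer is full

def aggregation_thread(model, n_nodes, n_blocks):
    """Aggregation pool: consume and fold in blocks as they appear."""
    for _ in range(n_blocks):
        idx, data = blocks.get()  # blocks when the buffer is empty
        model[idx] += data / n_nodes
        blocks.task_done()
```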

26 Benchmarks for distributed learning.

Name       Description                    Algorithm                Model Size  Training Data
mnist      Handwritten digit recognition  Backpropagation          2,432 KB    2.9 GB
acoustic   Speech recognition modeling    Backpropagation          1,527 KB    5.6 GB
stock      Stock price prediction         Linear Regression        31 KB       14.7 GB
texture    Image texture recognition      Linear Regression        64 KB       17.9 GB
tumor      Tumor classification           Logistic Regression      8 KB        10.4 GB
cancer1    Prostate cancer diagnosis      Logistic Regression      24 KB       13.5 GB
movielens  Movielens recommender system   Collaborative Filtering  1,176 KB    0.6 GB
netflix    Netflix recommender system     Collaborative Filtering  2,854 KB    2.0 GB
face       Human face detection           Support Vector Machine   7 KB        15.9 GB
cancer2    Cancer diagnosis               Support Vector Machine   28 KB       20.0 GB

27 List of results:
1. Performance comparison with Spark
2. Breakdown of performance between computation and system coordination
3. Performance comparison of FPGA-CoSMIC with programmable ASICs and GPUs
4. Power efficiency comparison between FPGAs, programmable ASICs, and GPUs
5. Sensitivity to mini-batch size, compute units, and memory bandwidth
6. Design space exploration for multithreading
7. Comparison of the template architecture with prior accelerator designs

29 Speedup in comparison with Spark. (Chart: speedup over 4-CPU Spark, log scale, across the ten benchmarks and their geomean.) 16-node CoSMIC with UltraScale+ FPGAs offers 18.8× speedup over 16-node Spark with Xeon E3 Skylake CPUs. Scaling from 4 to 16 nodes with CoSMIC yields a 2.7× improvement, while Spark offers 1.8×.

30 Speedup breakdown between computation and system coordination. (Chart: per-benchmark speedup over Spark, log scale, split into FPGA and system-software components.) CoSMIC's system software makes system coordination (34% of execution time) 28.4× faster; CoSMIC's FPGA hardware makes computation (66% of execution time) 20.7× faster. The overall speedup is 22.8×.
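
As a consistency check, the overall number follows from composing the two component speedups over their shares of execution time, Amdahl-style:

```latex
\text{overall} \;=\; \frac{1}{\dfrac{0.34}{28.4} + \dfrac{0.66}{20.7}}
            \;=\; \frac{1}{0.0120 + 0.0319} \;\approx\; 22.8\times
```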

31 Hardware is not enough; we need a novel system stack. (Chart: speedup over FPGA-CoSMIC for P-ASIC-CoSMIC and GPU-CoSMIC across the benchmarks.) P-ASIC and GPU provide 2.3× and 1.5× extra speedup over FPGA-CoSMIC, which is itself 22.8× faster than Spark.

32 Performance-per-Watt comparison. (Chart: improvement in performance-per-watt for FPGA-CoSMIC and P-ASIC-CoSMIC across the benchmarks.) With CoSMIC, FPGAs and P-ASICs provide 4.2× and 8.2× higher performance-per-watt than GPUs.

33 Conclusion. CoSMIC is a full-stack solution with an algorithmic angle. What is next: reduce communication, specialize the networking stack, and design more template architectures.
