Lecture 3: CUDA Programming & Compiler Front End


CS 515 Programming Language and Compilers I. Lecture 3: CUDA Programming & Compiler Front End. Zheng (Eddy) Zhang, Rutgers University, Fall 2017, 9/19/2017.

Lecture 3A: GPU Programming 2

3 Review: Performance Hazard I Control Divergence. Thread divergence is caused by control flow. How about this?

tid:  0 1 2 3 4 5 6 7
A[ ]: 0 0 6 0 0 2 4 1

if (A[tid]) { do some work; } else { do nothing; }

Or this?

for (i = 0; i < A[tid]; i++) { ... }

Divergence may degrade performance by up to warp_size times (warp size = 32 in modern GPUs). Optional reading: Zhang et al., On-the-Fly Elimination of Dynamic Irregularities for GPU Computing, ASPLOS '11.

4 Review: Performance Hazard II Global Memory Coalescing. Data is fetched as a contiguous memory chunk (a cache block/line):
- A cache block/line is typically 128 bytes.
- The purpose is to utilize the wide DRAM burst for maximum memory-level parallelism (MLP).
A thread warp is ready when all its threads' operands are ready, so we want to coalesce the data operands of a warp into as few contiguous memory chunks as possible. Assume each thread performs the access ... = A[P[tid]];
- P[ ] = { 0, 1, 2, 3, 4, 5, 6, 7 } gives coalesced accesses: threads 0-7 touch one cache block.
- P[ ] = { 0, 5, 1, 7, 4, 3, 6, 2 } gives non-coalesced accesses scattered across cache blocks.

5 Review: Performance Hazard III Shared Memory Bank Conflicts. Shared memory:
- Typically 16 or 32 banks.
- Successive 32-bit words are assigned to successive banks.
- A fixed-stride access may cause bank conflicts.
- Maximizing bank-level parallelism is important.
Reducing bank conflicts:
- Padding; might waste some shared memory space.
- Data layout transformation, e.g., array of structs to struct of arrays.
- Thread-data remapping, i.e., remap threads so that tasks mapped to different banks execute at the same time.
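The padding idea can be checked with a tiny CPU-side sketch (hypothetical helper names; assumes 32 banks of 32-bit words): in a 32-wide shared-memory tile, walking down a column hits the same bank in every row, while padding each row to 33 words spreads the column across banks.

```c
/* Bank of a 32-bit word in shared memory: bank = word_index % 32.
 * bank_unpadded: rows of width 32 -> a column walk maps every row
 * to the same bank (32-way conflict).
 * bank_padded: rows padded to 33 words -> consecutive rows of one
 * column land in different banks. */
enum { NBANKS = 32 };

int bank_unpadded(int row, int col) { return (row * 32 + col) % NBANKS; }
int bank_padded(int row, int col)   { return (row * 33 + col) % NBANKS; }
```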

6-7 Review: Performance Hazard III Shared Memory Bank Conflicts. (Figures only.)

8 Case Study 2 Parallel Reduction. What is a reduction operation? A binary operation and a roll-up operation. Let A = [a0, a1, a2, a3, ..., a(n-1)] and let ⊕ be an associative binary operator with identity element I:

reduce(⊕, A) = a0 ⊕ a1 ⊕ a2 ⊕ a3 ⊕ ... ⊕ a(n-1)

Sequential code:

int sum = 0;
for ( int i = 0; i < N; i++ )
    sum = sum + A[i];

Parallel (tree) reduction of [3 1 7 0 4 1 6 3]: pairwise sums give [4 7 5 9], then [11 14], then 25.
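The two shapes can be contrasted in a CPU-side sketch (sequential simulation with hypothetical names, not the CUDA kernel): the sequential loop is a chain of dependent additions, while the tree does log2(n) rounds of independent pairwise sums.

```c
/* Sequential reduction: one running sum, O(n) dependent additions. */
int reduce_seq(const int *a, int n) {
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += a[i];
    return sum;
}

/* Tree reduction: log2(n) rounds of independent pairwise sums
 * (the shape the GPU kernels exploit); n assumed a power of two.
 * Destroys a[]; the total ends up in a[0]. */
int reduce_tree(int *a, int n) {
    for (int s = 1; s < n; s *= 2)        /* one round per tree level */
        for (int i = 0; i + s < n; i += 2 * s)
            a[i] += a[i + s];             /* each "thread"'s pair */
    return a[0];
}
```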

9 Case Study 2 Parallel Reduction First Implementation.

tid = threadIdx.x;
for ( s = 1; s < blockDim.x; s *= 2 ) {
    if ( tid % (2*s) == 0 ) {
        sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
}

Any GPU performance hazard here? Thread divergence.

10 Case Study 2 Parallel Reduction First Implementation. (Code: as on the previous slide.) Reducing 16 million elements:

Implementation 1: Throughput = 1.5325 GB/s, Time = 0.04379 s
Implementation 2: Throughput = 1.9286 GB/s, Time = 0.03480 s
Implementation 3: Throughput = 2.6062 GB/s, Time = 0.02575 s

Any GPU performance hazard here? Thread divergence.

11 Case Study 2 Parallel Reduction Reduce Thread Divergence. (Figure: first version vs. second version.) The thread-task mapping changed!

12 Case Study 2 Parallel Reduction Reduce Thread Divergence. The thread-task mapping changed!

First version:

tid = threadIdx.x;
for ( s = 1; s < blockDim.x; s *= 2 ) {
    if ( tid % (2*s) == 0 ) {
        sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
}

Second version:

tid = threadIdx.x;
for ( s = 1; s < blockDim.x; s *= 2 ) {
    int index = 2 * s * tid;
    if ( index < blockDim.x ) {
        sdata[index] += sdata[index + s];
    }
    __syncthreads();
}

13 Case Study 2 Parallel Reduction Implementation 2. (Code: the second version from the previous slide.) Reducing 16 million elements:

Implementation 1: Throughput = 1.5325 GB/s, Time = 0.04379 s
Implementation 2: Throughput = 1.9286 GB/s, Time = 0.03480 s
Implementation 3: Throughput = 2.6062 GB/s, Time = 0.02575 s

Any performance hazard? Potential shared memory bank conflicts.

14 Case Study 2 Parallel Reduction Reduce Shared Memory Bank Conflicts. (Figure: second version vs. third version.) Let logically contiguous threads access contiguous shared memory locations.

15 Case Study 2 Parallel Reduction Third Implementation (utilize implicit intra-warp synchronization).

tid = threadIdx.x;
for ( s = blockDim.x/2; s > 0; s >>= 1 ) {
    if ( tid < s ) {
        sdata[tid] += sdata[tid + s];
    }
    __syncthreads();
}

Reducing 16 million elements:

Implementation 1: Throughput = 1.5325 GB/s, Time = 0.04379 s
Implementation 2: Throughput = 1.9286 GB/s, Time = 0.03480 s
Implementation 3: Throughput = 2.6062 GB/s, Time = 0.02575 s

Code: /ilab/users/zz124/cs515_2017/samples/6_advanced/reduction

16 Review: Synchronization Primitive. Within a thread block:
- Barrier: __syncthreads()
- All computation by threads in the thread block before the barrier completes before any computation by threads after the barrier begins.
- Barriers are a conservative way to express dependencies.
- Barriers divide computation into phases.
- In conditional code, the condition must be uniform across the block.
Across thread blocks:
- Implicit synchronization at the end of a GPU kernel.
- A barrier in the middle of a GPU kernel can be implemented using atomic compare-and-swap (CAS) and memory fence operations.
Optional reading: Xiao et al., Inter-block GPU communication via fast barrier synchronization, IPDPS '10.

17 Scan. Let A = [a0, a1, a2, a3, ..., a(n-1)] and let ⊕ be an associative binary operator with identity element I:

scan_inclusive(⊕, A) = [a0, a0 ⊕ a1, a0 ⊕ a1 ⊕ a2, ...]
scan_exclusive(⊕, A) = [I, a0, a0 ⊕ a1, a0 ⊕ a1 ⊕ a2, ...]

A prefix sum is scan_inclusive(+, A).

18 Prefix Sum. Let A = [a0, a1, a2, a3, ..., a(n-1)] and let the binary operator be +:

prefix_sum(+, A) = [a0, a0 + a1, a0 + a1 + a2, ...]

Sequential code:

int PrefixSum[N];
PrefixSum[0] = A[0];
for ( i = 1; i < N; i++ )
    PrefixSum[i] = PrefixSum[i-1] + A[i];

A[ ]:           3 1  7  0  4  1  6  3
PrefixSum(+,A): 3 4 11 11 15 16 22 25
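As a runnable C version of the sequential loop above (hypothetical function name; writes to a separate output array instead of PrefixSum[]):

```c
/* Inclusive prefix sum, sequential: out[i] = A[0] + ... + A[i]. */
void prefix_sum(const int *A, int *out, int n) {
    int run = 0;
    for (int i = 0; i < n; i++) {
        run += A[i];   /* running total up to and including A[i] */
        out[i] = run;
    }
}
```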

19 How Would You Parallelize Scan? Recall the parallel reduction algorithm (16-element up-sweep; ai-j denotes ai ⊕ ... ⊕ aj):

A[ ]:   a0 a1   a2 a3   a4 a5   a6 a7   a8 a9   a10 a10-11 a12 a12-13 a14 a14-15 -> (initial pairs below)
step 1: a0 a0-1 a2 a2-3 a4 a4-5 a6 a6-7 a8 a8-9 a10 a10-11 a12 a12-13 a14 a14-15
step 2: a0 a0-1 a2 a0-3 a4 a4-5 a6 a4-7 a8 a8-9 a10 a8-11  a12 a12-13 a14 a12-15
step 3: a0 a0-1 a2 a0-3 a4 a4-5 a6 a0-7 a8 a8-9 a10 a8-11  a12 a12-13 a14 a8-15
step 4: a0 a0-1 a2 a0-3 a4 a4-5 a6 a0-7 a8 a8-9 a10 a8-11  a12 a12-13 a14 a0-15

(The first row is the input a0 ... a15.)


21 How Would You Parallelize Scan? Let P = scan_exclusive(⊕, A). (The up-sweep trace from the previous slide, with the target P[ ] = p0 p1 p2 ... p15 added below it.)


23 How Would You Parallelize Scan? Let P = scan_exclusive(⊕, A). Observation: 14 = 8 + 4 + 2, so p14 = a0-7 + a8-11 + a12-13. Each prefix can be assembled from partial sums left behind by the up-sweep, whose final array is A'[ ] = a0 a0-1 a2 a0-3 a4 a4-5 a6 a0-7 a8 a8-9 a10 a8-11 a12 a12-13 a14 a0-15.


25 Parallel Scan (Work-Efficient Version). Let P = scan_exclusive(⊕, A). The Up-Sweep (Reduce) Phase: the reduction trace shown above, producing A'[ ] = a0 a0-1 a2 a0-3 a4 a4-5 a6 a0-7 a8 a8-9 a10 a8-11 a12 a12-13 a14 a0-15.

26 Parallel Scan Code. Let P = scan_exclusive(⊕, A). Up-Sweep pseudocode:

for d = 0 to (log2 n - 1) {
    if ( tid % 2^(d+1) == 0 ) {
        a[tid + 2^(d+1) - 1] += a[tid + 2^d - 1]
    }
    barrier_synchronization();
}

The Up-Sweep (Reduce) Phase.
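The up-sweep can be checked with a CPU-side sketch (sequential simulation with a hypothetical name, not the CUDA kernel): the inner loop plays the role of the threads whose tid % 2^(d+1) == 0, and s plays the role of 2^d.

```c
/* Sequential simulation of the up-sweep phase; n must be a power of
 * two. Afterwards a[n-1] holds the total and interior positions hold
 * the partial sums the down-sweep will reuse. */
void up_sweep(int *a, int n) {
    for (int s = 1; s < n; s *= 2)             /* s = 2^d          */
        for (int tid = 0; tid < n; tid += 2 * s)
            a[tid + 2*s - 1] += a[tid + s - 1]; /* right child += left */
}
```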


28 Parallel Scan Code. Asymptotic complexity of the up-sweep? O(n) total work.

29 Parallel Scan (Work-Efficient Version). Let P = scan_exclusive(⊕, A). The Down-Sweep Phase:

A'[ ]:  a0 a0-1 a2 a0-3 a4 a4-5 a6 a0-7 a8 a8-9 a10 a8-11 a12 a12-13 a14 a0-15
step 1: a0 a0-1 a2 a0-3 a4 a4-5 a6 a0-7 a8 a8-9 a10 a8-11 a12 a12-13 a14 0
step 2: a0 a0-1 a2 a0-3 a4 a4-5 a6 0    a8 a8-9 a10 a8-11 a12 a12-13 a14 a0-7
step 3: a0 a0-1 a2 0    a4 a4-5 a6 a0-3 a8 a8-9 a10 a0-7  a12 a12-13 a14 a0-11
step 4: a0 0    a2 a0-1 a4 a0-3 a6 a0-5 a8 a0-7 a10 a0-9  a12 a0-11  a14 a0-13
P[ ]:   0  a0   a0-1 a0-2 a0-3 a0-4 a0-5 a0-6 a0-7 a0-8 a0-9 a0-10 a0-11 a0-12 a0-13 a0-14


31 Parallel Scan Code (Work-Efficient Version). Let P = scan_exclusive(⊕, A). Down-Sweep pseudocode:

x[n-1] = 0
for d = (log2 n - 1) downto 0 {
    if ( tid % 2^(d+1) == 0 ) {
        tmp = a[tid + 2^d - 1]
        a[tid + 2^d - 1] = a[tid + 2^(d+1) - 1]
        a[tid + 2^(d+1) - 1] = tmp + a[tid + 2^(d+1) - 1]
    }
    barrier_synchronization();
}

The Down-Sweep Phase.
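A CPU-side sketch of the down-sweep alone (hypothetical name; sequential simulation assuming the array has already been up-swept as in the preceding slides): after it runs, the array holds the exclusive scan of the original input.

```c
/* Sequential simulation of the down-sweep phase on an up-swept array;
 * n must be a power of two. s plays the role of 2^d, descending. */
void down_sweep(int *a, int n) {
    a[n - 1] = 0;                        /* clear the root */
    for (int s = n / 2; s >= 1; s /= 2)
        for (int tid = 0; tid < n; tid += 2 * s) {
            int tmp = a[tid + s - 1];
            a[tid + s - 1] = a[tid + 2*s - 1];  /* pass prefix to left child */
            a[tid + 2*s - 1] += tmp;            /* right child adds old left */
        }
}
```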


33 Parallel Scan Code (Work-Efficient Version). Asymptotic complexity of the down-sweep? O(n) total work.

34 Simple Scan (when there is less work than parallelism). Let P = scan_inclusive(⊕, A). One phase:

A[ ]:   a0 a1   a2   a3   a4   a5   a6   a7   a8   a9   a10   a11    a12    a13    a14    a15
step 1: a0 a0-1 a1-2 a2-3 a3-4 a4-5 a5-6 a6-7 a7-8 a8-9 a9-10 a10-11 a11-12 a12-13 a13-14 a14-15
step 2: a0 a0-1 a0-2 a0-3 a1-4 a2-5 a3-6 a4-7 a5-8 a6-9 a7-10 a8-11  a9-12  a10-13 a11-14 a12-15
step 3: a0 a0-1 a0-2 a0-3 a0-4 a0-5 a0-6 a0-7 a1-8 a2-9 a3-10 a4-11  a5-12  a6-13  a7-14  a8-15
step 4: a0 a0-1 a0-2 a0-3 a0-4 a0-5 a0-6 a0-7 a0-8 a0-9 a0-10 a0-11  a0-12  a0-13  a0-14  a0-15
P[ ]:   a0 a0-1 a0-2 a0-3 a0-4 a0-5 a0-6 a0-7 a0-8 a0-9 a0-10 a0-11  a0-12  a0-13  a0-14  a0-15


36 Simple Scan Code. Let P = scan_inclusive(⊕, A). One-pass simple scan pseudocode:

for d = 0 to (log2 n - 1) {
    if ( tid >= 2^d ) {
        tmp = a[tid - 2^d] + a[tid]
    }
    barrier_synchronization();
    if ( tid >= 2^d ) {
        a[tid] = tmp
    }
    barrier_synchronization();
}

One Phase.
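A CPU-side sketch of the one-pass scan (hypothetical name; sequential simulation where a scratch copy stands in for the barrier between read and write): in round d, every element with index >= 2^d adds the element 2^d positions to its left.

```c
#include <string.h>

/* Sequential simulation of the one-pass (simple) inclusive scan.
 * Assumes n <= 1024. */
void simple_scan(int *a, int n) {
    int tmp[1024];
    for (int s = 1; s < n; s *= 2) {           /* s = 2^d */
        for (int i = 0; i < n; i++)            /* "all threads" read */
            tmp[i] = (i >= s) ? a[i - s] + a[i] : a[i];
        memcpy(a, tmp, n * sizeof a[0]);       /* "barrier", then write */
    }
}
```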


38 Simple Scan Code. Asymptotic complexity? O(n*log(n)) total work.

39 Question Assume work-efficient scan is performed in shared memory - What are the potential approaches we can use to reduce bank conflicts?

40 Segmented Scan. Let A = [[a0, a1, a2], [a3, a4], [a5, a6, a7, a8], ...] and let ⊕ = +:

segmented_scan_inclusive(⊕, A) = [[a0, a0 ⊕ a1, a0 ⊕ a1 ⊕ a2], [a3, a3 ⊕ a4], [a5, a5 ⊕ a6, a5 ⊕ a6 ⊕ a7, a5 ⊕ a6 ⊕ a7 ⊕ a8], ...]

41 Segmented Scan. Let P = segmented_scan_inclusive(⊕, A), with segment boundaries before a3, a5, a9, and a11:

A[ ]: a0 a1 a2 | a3 a4 | a5 a6 a7 a8 | a9 a10 | a11 a12 a13 a14 a15
P[ ]: a0 a0-1 a0-2 | a3 a3-4 | a5 a5-6 a5-7 a5-8 | a9 a9-10 | a11 a11-12 a11-13 a11-14 a11-15




48 Segmented Scan. Let P = segmented_scan_inclusive(⊕, A). One phase:

A[ ]:   a0 a1   a2   a3 a4   a5 a6   a7   a8   a9 a10   a11 a12    a13    a14    a15
step 1: a0 a0-1 a1-2 a3 a3-4 a5 a5-6 a6-7 a7-8 a9 a9-10 a11 a11-12 a12-13 a13-14 a14-15
step 2: a0 a0-1 a0-2 a3 a3-4 a5 a5-6 a5-7 a5-8 a9 a9-10 a11 a11-12 a11-13 a11-14 a12-15
step 3: a0 a0-1 a0-2 a3 a3-4 a5 a5-6 a5-7 a5-8 a9 a9-10 a11 a11-12 a11-13 a11-14 a11-15
P[ ]:   a0 a0-1 a0-2 a3 a3-4 a5 a5-6 a5-7 a5-8 a9 a9-10 a11 a11-12 a11-13 a11-14 a11-15


51 Segmented Scan Code. Let P = segmented_scan_inclusive(⊕, A). One-pass segmented scan pseudocode (flag[i] identifies the segment of element i):

for d = 0 to (log2 n - 1) {
    if ( tid >= 2^d && flag[tid] == flag[tid - 2^d] ) {
        tmp = a[tid - 2^d] + a[tid]
    }
    barrier_synchronization();
    if ( tid >= 2^d && flag[tid] == flag[tid - 2^d] ) {
        a[tid] = tmp
    }
    barrier_synchronization();
}
P = A;
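A CPU-side sketch of the same idea (hypothetical name; sequential simulation, with a scratch copy standing in for the barrier): an element only adds its left neighbor's partial sum when both lie in the same segment.

```c
#include <string.h>

/* Sequential simulation of the one-pass segmented inclusive scan.
 * flag[i] is the segment id of element i. Assumes n <= 1024. */
void seg_scan(int *a, const int *flag, int n) {
    int tmp[1024];
    for (int s = 1; s < n; s *= 2) {           /* s = 2^d */
        for (int i = 0; i < n; i++)
            tmp[i] = (i >= s && flag[i] == flag[i - s])
                         ? a[i - s] + a[i]     /* same segment: combine */
                         : a[i];               /* boundary: keep */
        memcpy(a, tmp, n * sizeof a[0]);
    }
}
```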

52 Segmented Scan for the Work-Efficient Version. Any idea? (Shown: the unsegmented up-sweep trace from slide 19, for reference.)

53 Segmented Scan for the Work-Efficient Version. Any idea? Up-Sweep Phase (additions stop at segment boundaries):

A[ ]:   a0 a1   a2 a3 a4 a5 a6 a7   a8 a9 a10 a11 a12 a13    a14 a15
step 1: a0 a0-1 a2 a3 a4 a5 a6 a6-7 a8 a9 a10 a11 a12 a12-13 a14 a14-15
step 2: a0 a0-1 a2 a3 a4 a5 a6 a5-7 a8 a9 a10 a11 a12 a12-13 a14 a12-15
steps 3-4: unchanged.

54 Segmented Scan for the Work-Efficient Version. Any idea? The Down-Sweep Phase:

A'[ ]:  a0 a0-1 a2 a3   a4 a5   a6 a5-7 a8 a9   a10 a11  a12 a12-13 a14 a12-15
step 1: a0 a0-1 a2 a3   a4 a5   a6 a5-7 a8 a9   a10 a11  a12 a12-13 a14 0
step 2: a0 a0-1 a2 a3   a4 a5   a6 0    a8 a9   a10 a11  a12 a12-13 a14 a5-7
step 3: a0 a0-1 a2 0    a4 a5   a6 a3   a8 a9   a10 a5-7 a12 a12-13 a14 a11
step 4: a0 0    a2 a0-1 a4 a3   a6 a5   a8 a5-7 a10 a9   a12 a11    a14 a11-13
P[ ]:   0  a0   a0-1 0  a3 0    a5 a5-6 a5-7 0  a9  0    a11 a11-12 a11-13 a11-14

55 Segmented Scan for the Work-Efficient Version. Any idea? Extra-Credit Assignment: design your own algorithm for the segmented work-efficient scan.
1. Specify the task that every thread does.
2. Specify the data every thread works on.
3. Is there any GPU-specific performance hazard in your algorithm?
4. If yes to question 3, how are you going to resolve it?
(Shown: the down-sweep trace from the previous slide.)

56 Hierarchical Scan. Perform scan on local chunks of data first and merge the results later:
- Run a 64-element scan on each chunk: local_scan_1, local_scan_2, local_scan_3, local_scan_4.
- Get the last element of each local scan: a0-63, a64-127, a128-191, a192-255.
- Perform a scan again over those totals: a0-63, a0-127, a0-191, a0-255.
- Add the resulting base to all elements in each local scan: add a0-63 to local_scan_2, a0-127 to local_scan_3, a0-191 to local_scan_4.
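The three steps can be sketched sequentially on the CPU (hypothetical name; the chunk size stands in for the 64-element blocks, and the per-chunk loops stand in for the per-block kernels):

```c
/* Sequential sketch of the hierarchical inclusive scan: scan each
 * chunk locally, scan the chunk totals, then add each chunk's base.
 * n must be a multiple of chunk; assumes n/chunk <= 256. */
void hier_scan(int *a, int n, int chunk) {
    int base[256];
    int nchunks = n / chunk;
    for (int c = 0; c < nchunks; c++) {        /* local inclusive scans */
        for (int i = 1; i < chunk; i++)
            a[c*chunk + i] += a[c*chunk + i - 1];
        base[c] = a[c*chunk + chunk - 1];      /* last element = chunk total */
    }
    for (int c = 1; c < nchunks; c++)          /* scan the totals */
        base[c] += base[c - 1];
    for (int c = 1; c < nchunks; c++)          /* add base to each chunk */
        for (int i = 0; i < chunk; i++)
            a[c*chunk + i] += base[c - 1];
}
```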

57 Hierarchical Segmented Scan. Perform segmented scan on local chunks of data first and merge the results later:
- Run a 64-element segmented scan on each chunk: local_seg_scan_1, local_seg_scan_2, local_seg_scan_3, local_seg_scan_4.
- Get the last element of each local segmented scan: ax-63, ay-127, az-191, aw-255.
- Perform a segmented scan again over those last elements.
- If the base is in the same segment as the elements at the start of a local segmented scan, conditionally add the base to the first segment of that local result (local_seg_scan_2, local_seg_scan_3, local_seg_scan_4).

58 An Example of Parallel Segmented Scan: Sparse Matrix-Vector Multiplication (SpMV). Let A = [[3*x0, x2], [2*x1], [4*x2], [2*x1, 6*x2, ..., 8*x(n-1)]] and ⊕ = +; for SpMV, y = segmented_scan(⊕, A). Project 1 requires a segmented scan at the warp level.
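A CPU-side sketch of the connection (hypothetical names, sequential simulation): lay the nonzero products out row by row, with the row index acting as the segment flag; a segmented inclusive sum then leaves each row's dot product at the row's last nonzero.

```c
/* SpMV as a segmented sum over nonzero products val[k] * x[col[k]],
 * where row[k] plays the role of the segment flag. The last write per
 * row leaves the full dot product in y[row]. */
void spmv_segsum(const double *val, const int *col, const int *row,
                 int nnz, const double *x, double *y) {
    double run = 0.0;
    for (int k = 0; k < nnz; k++) {
        if (k == 0 || row[k] != row[k - 1])  /* a new segment starts */
            run = 0.0;
        run += val[k] * x[col[k]];           /* segmented running sum */
        y[row[k]] = run;
    }
}
```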

59 Lecture 3B: Lexical and Syntax Analysis I (The lectures are based on the slides copyrighted by Keith Cooper and Linda Torczon from Rice University.)

60 Review: Three-Pass Compiler. Front End -> IR -> Middle End -> IR -> Back End -> Machine code, with errors reported throughout. Inside the front end: stream of characters -> Scanner -> Parser -> Semantic Elaboration -> IR. Scanner and Parser collaborate to check the syntax of the input program:
1. The Scanner maps the stream of characters into words (words are the fundamental unit of syntax) and produces a stream of tokens for the parser. A token is <part of speech, word>; a part of speech is a unit in the grammar.
2. The Parser maps the stream of words into a sentence in a grammatical model of the input language.


62 The Compiler's Front End (stream of characters → Scanner → Parser → Semantic Elaboration → IR) Scanner looks at every character
- Converts the stream of characters to a stream of tokens, or classified words: <part of speech, word>
- Efficiency & scalability matter: the scanner is the only part of the compiler that looks at every character
Parser looks at every token
- Determines if the stream of tokens forms a sentence in the source language
- Fits tokens to some syntactic model, or grammar, for the source language

64 The Compiler's Front End How can we automate the construction of scanners & parsers? Scanner
- Specify syntax with regular expressions (REs)
- Construct a finite automaton & scanner from the regular expression
Parser
- Specify syntax with context-free grammars (CFGs)
- Construct a push-down automaton & parser from the CFG

65 Example Suppose that we need to recognize the word "not": How would we look for it?
- Calling some routine? For example, fscanf(), the Scanner class in Java, or a regex library (Python, Java, C++)
- In a compiler, we would build a recognizer (not just call a routine)
Implementation idea
- The spelling is "n" followed by "o" followed by "t"
- The code shown to the right is as simple as the description
- Cost is O(1) per character
- Structure allows for precise error messages
Pseudocode: c ← next character; if c = 'n' then { c ← next character; if c = 'o' then { c ← next character; if c = 't' then return <NOT, "not"> else report error } else report error } else report error
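The pseudocode above can be sketched directly in Python. This is an illustrative rendering, not the course's required interface: the token <NOT, "not"> is modeled as a tuple, and an exception stands in for "report error".

```python
# Hand-coded recognizer for the word "not": one O(1) character
# test per position, with a precise error message on failure.
def recognize_not(stream):
    chars = iter(stream)
    for expected in "not":
        c = next(chars, None)
        if c != expected:
            raise ValueError(f"expected {expected!r}, got {c!r}")
    return ("NOT", "not")          # the token <NOT, "not">

print(recognize_not("not"))        # ('NOT', 'not')
```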

66 Automata We can represent this code as an automaton: c ← next character; if c = 'n' then { c ← next character; if c = 'o' then { c ← next character; if c = 't' then return <NOT, "not"> else report error } else report error } else report error Transition diagram for "not": s0 -n→ s1 -o→ s2 -t→ s3; any other character leads to the error state se O(1) per input character, if branches are O(1) States drawn with double lines indicate success

67 Automata To recognize a larger set of keywords is (relatively) easy For example, the set of words { if, int, not, then, to, write }: s0 -i→ s1 -f→ s2 (if); s1 -n→ s3 -t→ s4 (int); s0 -n→ s5 -o→ s6 -t→ s7 (not); s0 -t→ s8 -h→ s9 -e→ s10 -n→ s11 (then); s8 -o→ s12 (to); s0 -w→ s13 -r→ s14 -i→ s15 -t→ s16 -e→ s17 (write) One-to-one map from final states to words (maybe parts of speech) Cost is still O(1) per character
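The keyword automaton above is a trie; building it from a word list is a sketch of how such recognizers generalize. The dictionary encoding and the "accept" marker key are assumptions for illustration, not the lecture's construction.

```python
# Build the keyword automaton as a trie: each final state maps
# one-to-one to the word it accepts, as on the slide.
def build_keyword_dfa(words):
    trie = {}
    for w in words:
        node = trie
        for ch in w:
            node = node.setdefault(ch, {})
        node["accept"] = w         # marks a final state
    return trie

def recognize(trie, s):
    node = trie
    for ch in s:
        if ch not in node:
            return None            # transition to the error state
        node = node[ch]
    return node.get("accept")      # the word, or None if not final

dfa = build_keyword_dfa({"if", "int", "not", "then", "to", "write"})
print(recognize(dfa, "int"), recognize(dfa, "in"))   # int None
```

Each input character costs one dictionary lookup, preserving the O(1)-per-character bound.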

68 Specifying An Automaton We list the words Each word's spelling is unique and concise We can separate them with the symbol |, read as "or" There is some subtlety here. We introduced notation for concatenation and choice (or alternation). Concatenation: ab is a followed by b Alternation: a | b is either a or b The specification: if | int | not | then | to | write is a simple regular expression for the language accepted by our automaton

69 Specifying An Automaton Every regular expression corresponds to an automaton
- We can construct, automatically, an automaton from the RE
- We can construct, automatically, an RE from any automaton
- We can convert an automaton easily and directly into code
- The automaton and the code both have O(1) cost per input character
An RE specification leads directly to an efficient scanner

70 Unsigned Integers In the example, each word had one spelling What about a single part of speech with many spellings? Consider specifying an unsigned integer
- An unsigned integer is any string of digits from the set [0-9]
- Is 0001 allowed? Or 00? We might want to specify that no leading zeros are allowed
The automata: With leading zeros: s0 -[0-9]→ s2, s2 -[0-9]→ s2 (s2 accepts an unsigned integer) Without leading zeros: s0 -0→ s1 (accepts); s0 -[1-9]→ s2, s2 -[0-9]→ s2 (s2 accepts) How do we write the corresponding RE?

71 Unsigned Integers The automata (with and without leading zeros, as on the previous slide)
- We need a notation to represent the cyclic edge in the automata
- The Kleene closure x* represents zero or more instances of x
- The positive closure x+ represents one or more instances of x
[0-9]+ allows leading zeros; 0 | [1-9] [0-9]* forbids them

72 Unsigned Integers The RE corresponds to an automaton and an implementation: s0 -0→ s1 (accepts); s0 -[1-9]→ s2, s2 -[0-9]→ s2 (accepts) c ← next character; n ← 0 if c = '0' then return <CONSTANT, n> else if '1' ≤ c ≤ '9' then { n ← atoi(c); c ← next character; while '0' ≤ c ≤ '9' { t ← atoi(c); n ← n * 10 + t; c ← next character } return <CONSTANT, n> } else report error The automaton and the code can be generated automatically
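The pseudocode translates almost line for line into Python. This is an illustrative sketch: the token is modeled as a tuple, an exception stands in for "report error", and as in the slide's automaton the lone-zero path accepts immediately.

```python
# Scanner for the no-leading-zeros RE  0 | [1-9][0-9]*,
# returning the token <CONSTANT, value>.
def scan_unsigned(stream):
    chars = iter(stream)
    c = next(chars, None)
    if c == "0":                   # the lone-zero path (state s1)
        return ("CONSTANT", 0)
    if c is None or not ("1" <= c <= "9"):
        raise ValueError(f"unexpected {c!r}")
    n = ord(c) - ord("0")
    for c in chars:                # the [0-9]* loop (state s2)
        if not ("0" <= c <= "9"):
            break
        n = n * 10 + (ord(c) - ord("0"))
    return ("CONSTANT", n)

print(scan_unsigned("2374"))       # ('CONSTANT', 2374)
```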

73 Regular Expression A formal definition Regular Expressions over an Alphabet Σ
- If x ∈ Σ, then x is an RE denoting the set { x }, or the language L = { x }
- If x and y are REs, then xy is an RE denoting L(x)L(y) = { pq | p ∈ L(x) and q ∈ L(y) }
- x | y is an RE denoting L(x) ∪ L(y)
- x* is an RE denoting L(x)* = ∪_{0 ≤ k < ∞} L(x)^k (Kleene closure): the set of all strings that are zero or more concatenations of x
- x+ is an RE denoting L(x)+ = ∪_{1 ≤ k < ∞} L(x)^k (positive closure): the set of all strings that are one or more concatenations of x
- ε is an RE denoting the set { ε }, containing only the empty string

74 Regular Expression How do these operators help?
- If x ∈ Σ, then x is an RE denoting { x }: the spelling of any letter in the alphabet is an RE
- xy is an RE denoting L(x)L(y) = { pq | p ∈ L(x) and q ∈ L(y) }: if we concatenate letters, the result is an RE (the spelling of a word)
- x | y is an RE denoting L(x) ∪ L(y): any finite list of words can be written as an RE, ( w0 | w1 | w2 | … | wn )
- x* and x+ denote L(x)* and L(x)+: we can use closure to write finite descriptions of infinite, but countable, sets
- ε is an RE denoting { ε }: in practice, the empty string is often useful

75 Regular Expression Let the notation [0-9] be shorthand for (0|1|2|3|4|5|6|7|8|9) RE Examples:
- Non-negative integer: [0-9] [0-9]* or [0-9]+
- No leading zeros: 0 | [1-9] [0-9]*
- Algol-style identifier: ([a-z] | [A-Z]) ([a-z] | [A-Z] | [0-9])*
- Decimal number: (0 | [1-9] [0-9]*) . [0-9]*
- Real number: ((0 | [1-9] [0-9]*) | (0 | [1-9] [0-9]*) . [0-9]*) E [0-9] [0-9]*
Each of these REs corresponds to an automaton and an implementation.
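These examples can be checked with Python's `re` library, where `fullmatch` plays the role of the membership test "is this string in the language?". The pattern names below are my own labels, and the patterns are translations of the slide's REs into Python syntax.

```python
import re

# Python translations of the REs on the slide (anchored whole-string
# matches via re.fullmatch).
patterns = {
    "nonneg_int":   r"[0-9]+",
    "no_lead_zero": r"0|[1-9][0-9]*",
    "identifier":   r"[a-zA-Z][a-zA-Z0-9]*",
    "decimal":      r"(0|[1-9][0-9]*)\.[0-9]*",
}

def in_language(name, s):
    return re.fullmatch(patterns[name], s) is not None

print(in_language("no_lead_zero", "007"))   # False: leading zero
print(in_language("decimal", "3.14"))       # True
```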

76 From RE to Scanner We can use results from automata theory to construct scanners directly from REs
- There are several ways to perform this construction; the classic approach is a two-step method
- 1. Build automata for each piece of the RE using a simple template-driven method Build a specific variation on an automaton that has transitions on ε and non-deterministic choice (multiple transitions from a state on the same symbol) This construction is called Thompson's construction
- 2. Convert the newly built automaton into a deterministic automaton A deterministic automaton has no ε-transitions and all choices are single-valued This construction is called the subset construction
- Given the deterministic automaton, we can run a minimization algorithm on it to reduce the number of states
- Minimization is a space optimization; both the original automaton and the minimal one take O(1) time per character

77 Thompson's Construction The key idea
- For each RE symbol and operator, we have a small template
- Build them, in precedence order, and join them with ε-transitions
NFA for a: s0 -a→ s1 NFA for ab: s0 -a→ s1 -ε→ s3 -b→ s4 NFA for a | b: a new start state with ε-edges into the a and b fragments, whose accept states have ε-edges into a new accept state NFA for a*: a new start state with ε-edges to the a fragment and to a new accept state; the fragment's accept state has ε-edges back to the fragment's start and forward to the new accept state Precedence in REs: parentheses, then closure, then concatenation, then alternation

78 Thompson's Construction Let's build an NFA for a ( b | c )*
- 1. a, b, & c: one two-state fragment each, e.g. s0 -a→ s1
- 2. b | c: join the b and c fragments with a new start and accept state via ε-transitions
- 3. ( b | c )*: wrap the b | c NFA with new start/accept states and ε-transitions for the loop
- 4. a ( b | c )*: concatenate the a NFA with the ( b | c )* NFA via an ε-transition
Precedence in REs: parentheses, then closure, then concatenation, then alternation
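The four steps above can be sketched as code: one small template per RE operator, joined by ε-transitions. This is an illustrative miniature, not a full RE parser; the global transition table and state numbering are implementation choices of the sketch.

```python
# A tiny Thompson's construction: fragments are (start, accept)
# pairs, and the label None marks an ε-edge.
import itertools

counter = itertools.count()
trans = {}                         # state -> list of (label, target)

def state():
    s = next(counter)
    trans[s] = []
    return s

def sym(a):                        # template for a single symbol a
    s, t = state(), state()
    trans[s].append((a, t))
    return s, t

def cat(f, g):                     # template for fg
    trans[f[1]].append((None, g[0]))
    return f[0], g[1]

def alt(f, g):                     # template for f | g
    s, t = state(), state()
    for h in (f, g):
        trans[s].append((None, h[0]))
        trans[h[1]].append((None, t))
    return s, t

def star(f):                       # template for f*
    s, t = state(), state()
    trans[s] += [(None, f[0]), (None, t)]
    trans[f[1]] += [(None, f[0]), (None, t)]
    return s, t

def eps_closure(states):
    stack, seen = list(states), set(states)
    while stack:
        for lbl, t in trans[stack.pop()]:
            if lbl is None and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def accepts(frag, word):           # simulate the NFA directly
    cur = eps_closure({frag[0]})
    for ch in word:
        cur = eps_closure({t for s in cur
                           for lbl, t in trans[s] if lbl == ch})
    return frag[1] in cur

nfa = cat(sym("a"), star(alt(sym("b"), sym("c"))))   # a ( b | c )*
print(accepts(nfa, "abcb"), accepts(nfa, "ba"))      # True False
```

Simulating the NFA directly works but visits a set of states per character; the subset construction on the next slide removes that overhead.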

79 Subset Construction The concept
- Build a simpler automaton (no ε-transitions, no multi-valued transitions) that simulates the behavior of the more complex automaton
- Each state in the new automaton represents a set of states in the original NFA
For the NFA for a ( b | c )* (states n0 … n9): DFA states: d0 = { n0 }; d1 = { n1, n2, n3, n4, n6, n9 }; d2 = { n5, n8, n9, n3, n4, n6 }; d3 = { n7, n8, n9, n3, n4, n6 } DFA transitions: d0 -a→ d1; d1 -b→ d2, d1 -c→ d3; d2 -b→ d2, d2 -c→ d3; d3 -b→ d2, d3 -c→ d3
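The subset construction itself fits in a few lines. The ε-NFA below is hand-written to mirror the slide's a ( b | c )* example; the exact n0…n9 numbering is my assumption about how the slide labels its states.

```python
# Subset construction over a hand-written ε-NFA for a ( b | c )*.
# The label None marks an ε-edge; state 9 is the accepting state.
nfa = {
    0: [("a", 1)],
    1: [(None, 2)],
    2: [(None, 3), (None, 9)],     # star entry: into (b|c) or straight out
    3: [(None, 4), (None, 6)],     # alternation entry
    4: [("b", 5)],
    6: [("c", 7)],
    5: [(None, 8)],
    7: [(None, 8)],
    8: [(None, 3), (None, 9)],     # loop back or leave the closure
    9: [],
}
start, alphabet = 0, "abc"

def eps_closure(states):
    stack, seen = list(states), set(states)
    while stack:
        for lbl, t in nfa[stack.pop()]:
            if lbl is None and t not in seen:
                seen.add(t)
                stack.append(t)
    return frozenset(seen)

def subset_construction():
    d0 = eps_closure({start})
    dfa, work = {}, [d0]
    while work:                     # each DFA state is a set of NFA states
        d = work.pop()
        dfa[d] = {}
        for a in alphabet:
            nxt = eps_closure({t for s in d
                               for lbl, t in nfa[s] if lbl == a})
            if nxt:
                dfa[d][a] = nxt
                if nxt not in dfa and nxt not in work:
                    work.append(nxt)
    return d0, dfa

d0, dfa = subset_construction()
print(len(dfa))                     # 4 DFA states, as on the slide
```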

80 Minimization DFA minimization algorithms work by discovering states that are equivalent in their contexts and replacing multiple states with a single one
- Minimization reduces the number of states, but does not change the costs
For our example, the minimal DFA is s0 -a→ s1, with s1 -b→ s1 and s1 -c→ s1 Minimal DFA state → original DFA states: s0 = { d0 }; s1 = { d1, d2, d3 }

81 Implementing an Automaton A common strategy is to simulate the DFA's execution
- Skeleton scanner + a table that encodes the automaton
- The scanner generator constructs the table
- The skeleton scanner does not change (much)
Simple skeleton scanner: state ← s0; char ← NextChar(); while (char ≠ EOF) { state ← δ[state, char]; char ← NextChar() } if (state is a final state) then report success else report an error Transition table δ for our minimal DFA: δ(s0, a) = s1; δ(s0, b) = δ(s0, c) = se; δ(s1, a) = se; δ(s1, b) = δ(s1, c) = s1
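The table-driven strategy can be sketched in Python against the minimal DFA for a ( b | c )*. The dictionary-as-table encoding is an implementation choice of this sketch; the driver loop is the part that "does not change" when the table does.

```python
# Table-driven skeleton scanner for the minimal DFA accepting
# a ( b | c )*: missing table entries go to the error state.
ERROR = "se"
delta = {
    ("s0", "a"): "s1",
    ("s1", "b"): "s1",
    ("s1", "c"): "s1",
}
final = {"s1"}

def scan(word):
    state = "s0"
    for ch in word:                          # one table lookup per char
        state = delta.get((state, ch), ERROR)
        if state == ERROR:
            return False                     # report an error
    return state in final                    # report success

print(scan("abcb"), scan("ba"))              # True False
```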

82 Automatic Scanner Construction Scanner Generator
- Takes in a specification written as a collection of regular expressions
- Combines them into one RE using alternation ( | )
- Builds the minimal automaton (§2.4, §2.6.2 EAC)
- Emits the tables to drive a skeleton scanner (§2.5 EAC)
Source code → Skeleton Scanner (driven by the Tables) → <part of speech, word> pairs; Specifications (as REs) → Scanner Generator → Tables

83 Automatic Scanner Construction Scanner Generator
- As an alternative, the generator can produce code rather than tables
- Direct-coded scanners are ugly, but often faster than table-driven scanners
- Other than speed, the two are equivalent

84 Summary Automating Scanner Construction
- Write down the REs for the input language and connect them with |
- Build a big, simple NFA
- Build the DFA that simulates the NFA
- Minimize the number of states in the DFA
- Generate an implementation from the minimal DFA
Scanner Generators
- lex, flex, jflex, and such all work along these lines
- Algorithms are well-known and well-understood
- Key issues are: finding the longest match rather than the first match, and engineering the interface to the parser
- You can build your own scanner generator
Finding the longest match is where O(1) per character can turn into O(n). More next lecture.

85 Reading Parallel Prefix Sum (Scan) with CUDA, GPU Gems 3, Chapter 39, by Mark Harris, Shubhabrata Sengupta, and John Owens. http://http.developer.nvidia.com/gpugems3/gpugems3_ch39.html Performance Optimization by Paulius Micikevicius, NVIDIA http://www.nvidia.com/docs/io/116711/sc11-perf-optimization.pdf CUDA C/C++ Basics http://www.nvidia.com/docs/io/116711/sc11-cuda-c-basics.pdf Loop Unrolling for GPU Computing http://www.nvidia.com/docs/io/116711/sc11-unrolling-parallel-loops.pdf CUDA Programming Guide, Chapters 1-5. Book: Engineering a Compiler (EAC), Chapters 2.2, 2.4, 2.5, 2.6