Syntax Analysis
COMP 524: Programming Languages
Srinivas Krishnan, January 25, 2011
Based in part on slides and notes by Bjoern Brandenburg, S. Olivier and A. Block.

The Big Picture
[Compiler pipeline figure: character stream → scanner (lexical analysis) → token stream → parser (syntax analysis) → parse tree → semantic analysis & intermediate code generation → abstract syntax tree → machine-independent optimization (optional) → modified intermediate form → target code generation → machine language → machine-specific optimization (optional) → modified target language. A symbol table is shared by all phases.]

Syntax Analysis: Discovery of Program Structure
Turn the stream of individual input tokens into a complete, hierarchical representation of the program (or compilation unit).

Syntax Specification and Parsing
Syntax specification: how can we succinctly describe the structure of legal programs? Context-free grammars.
Syntax recognition: how can a compiler discover whether a program conforms to the specification? LL and LR parsers.

Context-Free Grammars: Regular Grammar + Recursion
Review: a grammar is a collection of productions. A production defines a non-terminal (on the left, the "head") in terms of a string of terminal and non-terminal symbols. Terminal symbols are elements of the alphabet of the grammar. A non-terminal can be the head of multiple productions.

Example: natural numbers
  digit → 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
  non_zero_digit → 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
  natural_number → non_zero_digit digit*

Review: regular grammars
Restriction: no unrestricted recursion. A non-terminal symbol cannot be defined in terms of itself (except for special cases that are equivalent to a Kleene closure). This is a serious limitation: e.g., a regular grammar cannot express the matching-parenthesis requirement.

Context-Free Grammars: Regular Grammar + Recursion
Context-free grammars (CFGs) allow recursion.

Example: arithmetic expressions with parentheses
  expr → id | number | - expr | ( expr ) | expr op expr
  op → + | - | * | /

Recursion: an expression can be a minus sign followed by an expression, and an expression can contain nested expressions. This recursion is what lets the grammar express the matching-parenthesis requirement.

Key difference to a lexical grammar: here the terminal symbols are tokens, not individual characters.

One of the non-terminals, usually the first one, is called the start symbol; it defines the construct described by the grammar.

BNF vs. EBNF
Backus-Naur Form (BNF)
Originally developed for the ALGOL 58/60 reports; a textual notation for context-free grammars. The production

  expr → id | number | - expr | ( expr ) | expr op expr

is written as

  <expr> ::= id | number | - <expr> | ( <expr> ) | <expr> <op> <expr>

Strictly speaking, BNF does not include the Kleene star and similar notational sugar.

Extended Backus-Naur Form (EBNF)
Many authors extend BNF to simplify grammars; one of the first to do so was Niklaus Wirth. There exists an ISO standard for EBNF (ISO/IEC 14977), but many dialects exist.

Features:
Terminal symbols are quoted.
= is used instead of ::= to denote a definition.
, is used for concatenation.
[A] means A can occur optionally (zero or one time).
{A} means A can occur repeatedly (Kleene star).
Parentheses are allowed for grouping.
And then some.
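
As a concrete illustration of these conventions (one possible rendering, not taken from the slides), the natural-number grammar from earlier could be written in ISO-style EBNF as:

  digit          = "0" | "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
  non_zero_digit = "1" | "2" | "3" | "4" | "5" | "6" | "7" | "8" | "9" ;
  natural_number = non_zero_digit, { digit } ;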

In these notes we will mostly use BNF-like grammars with the addition of the Kleene star, ε, and parentheses.

Example: EBNF to BNF Conversion

  id_list → id ( , id )*

is equivalent to

  id_list → id
  id_list → id_list , id

(Remember that a non-terminal can be the head of multiple productions.)

Derivation
A grammar allows programs to be derived: productions are rewriting rules. A program is syntactically correct if and only if it can be derived from the start symbol.

Derivation process:
  Begin with a string consisting only of the start symbol.
  While the string contains a non-terminal symbol:
    Choose one non-terminal symbol X.
    Choose a production where X is the head.
    Replace X with the right-hand side of that production.
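
The derivation loop is easy to mechanize. The following Python sketch (an illustration, not part of the slides; the grammar encoding and the scripted production choices are assumptions) replays a left-most derivation with the arithmetic grammar:

GRAMMAR = {
    "expr": [["id"], ["number"], ["-", "expr"], ["(", "expr", ")"],
             ["expr", "op", "expr"]],
    "op":   [["+"], ["-"], ["*"], ["/"]],
}

def derive(start, choices):
    # Each step replaces the left-most non-terminal with the body of the
    # chosen production, i.e., this replays a left-most derivation.
    string = [start]
    print(" ".join(string))
    for production_index in choices:
        i = next(i for i, sym in enumerate(string) if sym in GRAMMAR)
        head = string[i]
        string[i:i + 1] = GRAMMAR[head][production_index]
        print(" ".join(string))
    return string

# Left-most derivation of "id + id":
#   expr => expr op expr => id op expr => id + expr => id + id
derive("expr", [4, 0, 0, 0])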

Derivation
If we always choose the left-most non-terminal symbol, the derivation is called a left-most derivation. If we always choose the right-most non-terminal symbol, it is called a right-most or canonical derivation.

Example Derivation
Arithmetic grammar:
  expr → id | number | - expr | ( expr ) | expr op expr
  op → + | - | * | /
Program: slope * x + intercept

The derivation proceeds one step at a time; ⇒ denotes "derives":

  expr ⇒ expr op expr
       ⇒ expr op id

       ⇒ expr + id
       ⇒ expr op expr + id
       ⇒ expr op id + id

       ⇒ expr * id + id
       ⇒ id * id + id
Finally, substituting the values of the identifier tokens:
       ⇒ slope * x + intercept

This is a right-most derivation.

Parse Tree
A parse tree is a hierarchical representation of the derivation that does not show the derivation order.
[Parse tree for slope * x + intercept: the root expr has children expr, op (+), and id (intercept); the left expr has children id (slope), op (*), and id (x).]

Properties:
Each interior node is a non-terminal symbol; its children are the right-hand side of the production that it was replaced with.
Leaf nodes are terminal symbols (tokens).
Many-to-one: many derivations can yield identical parse trees.
The parse tree defines the structure of the program.

This parse tree represents the formula slope * x + intercept.

Parse Tree
Now let's do a left-most derivation of slope * x + intercept with the same arithmetic grammar. It yields a different parse tree:
[Parse tree: the root expr has children id (slope), op (*), and an expr whose children are id (x), op (+), and id (intercept).]

Parse Tree (Ambiguous)
This grammar is ambiguous: it can also construct the parse tree above, which represents the formula slope * (x + intercept), and that is not equal to slope * x + intercept.

Ambiguity
The parse tree defines the structure of the program, and a program should have only one valid interpretation! Two solutions:
Make the grammar unambiguous, i.e., ensure that all derivations yield identical parse trees.
Provide disambiguating rules.

Disambiguating the Grammar
The problem with our original grammar is that it does not fully express the grammatical structure (i.e., associativity and precedence). To create an unambiguous grammar, we need to fully specify the grammar and differentiate between terms and factors.
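
To make the ambiguity concrete, here is a small sketch (an illustration, not part of the slides) that evaluates the two competing parse trees for slope * x + intercept with sample values; the trees disagree, which is exactly why the grammar must be disambiguated:

# A tree node is either an identifier or a tuple (left, operator, right).
values = {"slope": 2, "x": 3, "intercept": 4}

def evaluate(node):
    if isinstance(node, str):
        return values[node]
    left, op, right = node
    if op == "*":
        return evaluate(left) * evaluate(right)
    return evaluate(left) + evaluate(right)

tree_right_most = (("slope", "*", "x"), "+", "intercept")   # (slope * x) + intercept
tree_left_most  = ("slope", "*", ("x", "+", "intercept"))   # slope * (x + intercept)

print(evaluate(tree_right_most))   # 10
print(evaluate(tree_left_most))    # 14 -- same tokens, different meaning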

Disambiguating the Grammar
The revised grammar, which distinguishes expressions, terms, and factors:

  expr → term | expr add_op term
  term → factor | term mult_op factor
  factor → id | number | - factor | ( expr )
  add_op → + | -
  mult_op → * | /

This gives precedence to multiplication.

Example Parse Tree: 3 + 4 * 5
[Parse tree: the root expr has children expr (deriving number (3)), add_op (+), and term; that term has children term (deriving number (4)), mult_op (*), and factor (number (5)).]

In this tree, multiplication precedes addition.

Another Example
Let's try deriving 3 * 4 + 5 * 6 + 7.
[Parse tree for 3 * 4 + 5 * 6 + 7: the root expr has children expr, add_op (+), and a term deriving 7; the inner expr has children expr (a term deriving 3 * 4 via mult_op), add_op (+), and a term deriving 5 * 6 via mult_op.]

Parser
The purpose of the parser is to construct the parse tree that corresponds to the input token stream (if such a tree exists, i.e., for correct input). This is a non-trivial problem: for example, consider 3 * 4 and 3 + 4.
[Two parse trees: for 3 + 4, an add_op (+) node joins 3 and 4; for 3 * 4, a mult_op (*) node joins 3 and 4.]

How can a computer derive these trees by examining one token at a time? In order to derive them, the first thing that we really need to examine is the math operator in the middle. Writing such ad-hoc parsers by hand is difficult, tedious, and error-prone.

Complexity of Parsing
Arbitrary CFGs can be parsed in O(n³) time, where n is the length of the program (in tokens), e.g., with Earley's algorithm or the Cocke-Younger-Kasami (CYK) algorithm. This is too inefficient for most purposes.

Efficient parsing is possible: there are (restricted) types of grammars that can be parsed in linear time, i.e., O(n). Two important classes:
LL (Left-to-right, Left-most derivation): the class of all grammars for which a left-most derivation always yields a parse tree.
LR (Left-to-right, Right-most derivation): the class of all grammars for which a right-most derivation always yields a parse tree.
These are sufficient to express most programming languages.

LL Parsers vs. LR Parsers
LL parsers find a left-most derivation and create the parse tree in top-down order, beginning at the root. They can be constructed manually or generated automatically with tools, and they are easy to understand, although LL grammars sometimes appear unnatural. Also called predictive parsers.
LR parsers find a right-most derivation and create the parse tree in bottom-up order, beginning at the leaves. They are usually generated by tools; their operation is less intuitive, but LR grammars are often more natural. Also called shift-reduce parsers. LR is strictly more expressive: every LL grammar is also an LR grammar, but the converse is not true.
Both are used in practice. We focus on LL.

LL vs. LR Example
A simple grammar for a list of identifiers:

  id_list → id id_list_tail
  id_list_tail → , id id_list_tail
  id_list_tail → ;

Input: A, B, C ;

LL Example
We begin with the current token at A and the (as of yet empty) parse tree.

Step 1: Begin with the root (the start symbol id_list).
Step 2: Apply the id_list production. This matches the first identifier, A, to the expected id token.
Step 3: Apply a production for id_list_tail. There are two to choose from; predict that the first one applies. This matches two more tokens (the comma and B).

Step 4: Substitute the next id_list_tail, predicting the first production again. This matches a comma and C.
Step 5: Substitute the final id_list_tail. This time predict the other production, which matches the ';'.

LL Parse Tree: Top-Down Construction
[Completed parse tree: id_list → id (A) id_list_tail; id_list_tail → , id (B) id_list_tail; id_list_tail → , id (C) id_list_tail; id_list_tail → ;]

Notice that the input tokens are placed in the tree from left to right.

LR Example
Same input, A, B, C ;, now parsed bottom-up. The parser maintains a forest (a stack) of partial trees; forest = set of (partial) trees.

Step 1: Shift the encountered token into the forest: id (A).
Step 2: Determine that no right-hand side of any production matches the top of the forest. Shift the next token into the forest: id (A) ,
Steps 3-6: Still no right-hand side matches the top of the forest; repeatedly shift the next token into the forest: id (A) , id (B) , id (C) ;

Step 7: Detect that the last production (id_list_tail → ;) matches the top of the forest. Reduce the top token to a partial tree: id (A) , id (B) , id (C) id_list_tail
Step 8: Detect that the second production (id_list_tail → , id id_list_tail) matches. Reduce the top of the forest: id (A) , id (B) id_list_tail
Step 9: Detect that the second production matches again. Reduce the top of the forest: id (A) id_list_tail

Step 10: Detect that the first production (id_list → id id_list_tail) matches. Reduce the top of the forest to id_list.

LR Parse Tree: Bottom-Up Construction
[The completed parse tree is the same as in the LL example, but it was built from the leaves up.]
The problem with this grammar is that it can require an arbitrarily large number of terminals to be shifted before reduction takes place.

An Equivalent Grammar, Better Suited to LR Parsing

  id_list → id_list_prefix ;
  id_list_prefix → id_list_prefix , id
  id_list_prefix → id

This grammar limits the number of suspended non-terminals.
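
As a toy illustration of shift-reduce parsing with this grammar (an assumption-laden sketch, not a real LR parser generator), the following Python code shifts tokens onto a stack and greedily reduces whenever the top of the stack matches a production body, preferring the longest match. Note how the stack never grows beyond a few symbols, which is exactly the point of the rewritten grammar:

# Productions of the LR-friendly identifier-list grammar, longest bodies first
# so that "id_list_prefix , id" wins over the bare "id".
PRODUCTIONS = [
    ("id_list",        ["id_list_prefix", ";"]),
    ("id_list_prefix", ["id_list_prefix", ",", "id"]),
    ("id_list_prefix", ["id"]),
]
PRODUCTIONS.sort(key=lambda p: len(p[1]), reverse=True)

def parse(tokens):
    stack = []                                   # the "forest"
    for token in tokens:
        stack.append(token)                      # shift
        print("shift ", stack)
        reduced = True
        while reduced:                           # reduce as long as possible
            reduced = False
            for head, body in PRODUCTIONS:
                if stack[-len(body):] == body:
                    del stack[-len(body):]
                    stack.append(head)
                    print("reduce", stack)
                    reduced = True
                    break
    return stack

assert parse(["id", ",", "id", ",", "id", ";"]) == ["id_list"]   # A, B, C ;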

However, this grammar creates a problem for an LL parser: when the parser discovers an id, it cannot predict the number of id_list_prefix productions that it needs to match.

Two Approaches to LL Parser Construction
Recursive descent: a mutually recursive set of subroutines, one subroutine per non-terminal, with case statements based on the current token to predict subsequent productions.
Table-driven: not recursive; instead the parser keeps an explicit stack of expected symbols and runs a loop that processes the top of the stack. Terminal symbols on the stack are simply matched; non-terminal symbols are replaced with productions. The choice of production is driven by a table, as in the sketch below.
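
A minimal sketch of the table-driven approach (assumed for illustration; the slides do not give this code), using the LL(1) identifier-list grammar from the LL example:

# Grammar:  id_list -> id id_list_tail
#           id_list_tail -> , id id_list_tail  |  ;
NONTERMINALS = {"id_list", "id_list_tail"}

# Parse table: (non-terminal, lookahead token) -> production body to push.
TABLE = {
    ("id_list", "id"):     ["id", "id_list_tail"],
    ("id_list_tail", ","): [",", "id", "id_list_tail"],
    ("id_list_tail", ";"): [";"],
}

def parse(tokens):
    tokens = list(tokens) + ["<eof>"]
    pos = 0
    stack = ["id_list"]                    # expected symbols; top is the list end
    while stack:
        top = stack.pop()
        lookahead = tokens[pos]
        if top in NONTERMINALS:
            body = TABLE.get((top, lookahead))
            if body is None:
                raise SyntaxError(f"unexpected {lookahead!r} while expanding {top}")
            stack.extend(reversed(body))   # replace non-terminal with its body
        elif top == lookahead:
            pos += 1                       # terminal on the stack matches the input
        else:
            raise SyntaxError(f"expected {top!r}, got {lookahead!r}")
    if tokens[pos] != "<eof>":
        raise SyntaxError("trailing input after a complete id_list")
    print("accepted")

parse(["id", ",", "id", ",", "id", ";"])   # A, B, C ;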

Recursive Descent Example
Recursive descent = climb from the root to the leaves, calling a subroutine for every level.
Identifier list grammar: recall our LL-compatible original version.
Recursive descent approach: we need one subroutine for each non-terminal. Each subroutine adds tokens into the growing parse tree and/or calls further subroutines to resolve non-terminals, possibly including itself, either directly or indirectly (e.g., the routine for id_list_tail → , id id_list_tail calls itself).

Helper routine match
Used to consume expected terminals/tokens: given an expected token type (e.g., id, ;, or ,), it checks whether the next token is of the correct type and raises an error otherwise.

  subroutine match(expected_type):
      token = get_next_token()
      if (token.type == expected_type):
          make token a child of the left-most non-terminal in the tree
      else:
          throw ParseException("expected " + expected_type)

An example of a match failure in practice, from Eclipse: "class token expected, but got id".

Parsing id_list
Trivial: there is only one production. Simply match an id and then delegate parsing of the tail to the subroutine for id_list_tail. This delegation is the "descent" part of recursive descent parsing.

  subroutine parse_id_list():
      match(id_token)
      parse_id_list_tail()

Parsing id_list_tail
There are two productions to choose from, so the parser must predict which one is the correct one. This requires looking ahead and examining the next token (without consuming it).

  subroutine parse_id_list_tail():
      type = peek_at_next_token_type()
      case type of
          COMMA_TOKEN:     match(comma_token); match(id_token); parse_id_list_tail()
          SEMICOLON_TOKEN: match(semicolon_token)
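
Putting the pieces together, here is a runnable Python rendering of the recursive-descent pseudocode above (the flat token representation and the Parser class are assumptions; the slides also attach tokens to a parse tree, which this sketch omits):

class Parser:
    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.pos = 0

    def peek(self):
        # Look at the next token without consuming it.
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "<eof>"

    def match(self, expected):
        # Consume the next token if it has the expected type, else report an error.
        if self.peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {self.peek()!r}")
        self.pos += 1

    def parse_id_list(self):
        # id_list -> id id_list_tail
        self.match("id")
        self.parse_id_list_tail()

    def parse_id_list_tail(self):
        # id_list_tail -> , id id_list_tail  |  ;
        # One token of lookahead chooses the production (LL(1)).
        if self.peek() == ",":
            self.match(",")
            self.match("id")
            self.parse_id_list_tail()
        else:
            self.match(";")

Parser(["id", ",", "id", ",", "id", ";"]).parse_id_list()   # parses  A, B, C ;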

The recursive call for the production id_list_tail → , id id_list_tail is the "recursive" part of recursive descent parsing.

We need one token of lookahead. Parsers that require k tokens of lookahead are called LL(k) (or LR(k)) parsers; thus, this is an LL(1) parser.

LL(k) Parsers
Recall our non-LL-compatible grammar, which is better for LR parsing but problematic for predictive parsing:

  id_list → id_list_prefix ;
  id_list_prefix → id_list_prefix , id
  id_list_prefix → id

It cannot be parsed by an LL(1) parser: the parser cannot predict which id_list_prefix production to choose if the next token is of type id. However, an LL(2) parser can parse this grammar: just look at the second token ahead and disambiguate based on , vs. ;.

Bottom line: we can enlarge the class of supported grammars by using k > 1 tokens of lookahead, but at the expense of reduced performance / backtracking. Most production LL parsers use k = 1.

Predict Sets
The question is how we label the case statements (as in parse_id_list_tail above) in general, i.e., for arbitrary LL grammars.

First, Follow, and Predict (sets of terminal symbols)
FIRST(A): the terminals that can be the first token of a valid derivation starting with symbol A. Trivially, for each terminal T, FIRST(T) = {T}.
FOLLOW(A): the terminals that can follow the symbol A in any valid derivation. (A is usually a non-terminal.)
PREDICT(A → α): the terminals that can be the first tokens as a result of applying the production A → α (α is a string of symbols). The terminals in this set form the label of the case statement that predicts A → α.

Note: for a non-terminal A, the set FIRST(A) (which may include ε) is the union of the predict sets of all productions with A as the head. If there exist three productions A → α, A → β, and A → λ, then
  FIRST(A) = PREDICT(A → α) ∪ PREDICT(A → β) ∪ PREDICT(A → λ)

PREDICT(A → α)
If α is ε, i.e., if A is derived to "nothing": PREDICT(A → ε) = FOLLOW(A).
Otherwise, if α is a string of symbols that starts with X: PREDICT(A → α) = FIRST(X).

Inductive Definition of FIRST(A)
If A is a terminal symbol, then FIRST(A) = {A}.
If A is a non-terminal symbol and there exists a production A → X ..., then FIRST(X) ⊆ FIRST(A). (X can be terminal or non-terminal.)

(Notation: X is the first symbol of the production body.)

Inductive Definition of FOLLOW(A)
If the substring A X occurs anywhere in the grammar, then FIRST(X) ⊆ FOLLOW(A).
If there exists a production X → ... A (i.e., A is the last symbol of the production body), then FOLLOW(X) ⊆ FOLLOW(A).

Computing First, Follow, and Predict
Inductive definition: FIRST, FOLLOW, and PREDICT are defined in terms of each other. The exception is FIRST for terminals; this is the base case of the induction.
Iterative computation: start with FIRST for the terminals and set all other sets to empty. Repeatedly apply all of the definitions (i.e., include the known subsets). Terminate when the sets no longer change.

Predict Set Example

  id_list → id_list_prefix ;
  id_list_prefix → id_list_prefix , id    [1]
  id_list_prefix → id                     [2]

Base case: FIRST(id) = {id}.
Induction for [2]: FIRST(id) ⊆ FIRST(id_list_prefix) = {id}.
Induction for [1]: FIRST(id_list_prefix) ⊆ FIRST(id_list_prefix).
The predict sets for [1] and [2] are identical: the grammar is not LL(1)!
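
A small Python sketch of the iterative computation (an illustration, not from the slides), simplified to grammars without ε-productions so that PREDICT(A → X α) = FIRST(X) and FOLLOW sets are not needed. Applied to the example grammar, it reproduces the conclusion that the two id_list_prefix productions have identical predict sets:

GRAMMAR = {
    "id_list":        [["id_list_prefix", ";"]],
    "id_list_prefix": [["id_list_prefix", ",", "id"],   # [1]
                       ["id"]],                         # [2]
}

def first_sets(grammar):
    # FIRST(t) = {t} for terminals; non-terminal FIRST sets start empty and
    # grow by repeatedly applying FIRST(X) ⊆ FIRST(A) for A -> X ... until
    # nothing changes (a fixed point is reached).
    first = {head: set() for head in grammar}
    changed = True
    while changed:
        changed = False
        for head, bodies in grammar.items():
            for body in bodies:
                x = body[0]
                addition = first[x] if x in grammar else {x}
                if not addition <= first[head]:
                    first[head] |= addition
                    changed = True
    return first

FIRST = first_sets(GRAMMAR)

def predict(body):
    x = body[0]
    return FIRST[x] if x in GRAMMAR else {x}

for body in GRAMMAR["id_list_prefix"]:
    print("PREDICT(id_list_prefix ->", " ".join(body), ") =", predict(body))
# Both predict sets are {'id'}: the grammar is not LL(1), as concluded above.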

Left Recursion
Left recursion: the left-most symbol of a production body is the recursive non-terminal itself. This causes a grammar not to be LL(1); recursive descent would enter infinite recursion (to parse an id_list_prefix, call the parser for id_list_prefix, which calls the parser for id_list_prefix, which ...). It is, however, desirable for LR grammars.

  id_list → id_list_prefix ;
  id_list_prefix → id_list_prefix , id
  id_list_prefix → id

Left-Factoring
Introduce tail symbols to avoid left recursion: split a recursive production into an unambiguous prefix and an optional tail.

  expr → term | expr add_op term

is equivalent to

  expr → term expr_tail
  expr_tail → ε
  expr_tail → add_op expr
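
The same transformation can be written as a small function. This Python sketch (not from the slides) implements the standard removal of immediate left recursion, which produces a right-recursive tail; its output differs cosmetically from the expr_tail version above but accepts the same strings:

def remove_left_recursion(head, bodies):
    # A -> A alpha | beta   becomes   A -> beta A_tail ;  A_tail -> alpha A_tail | eps
    # ("eps" stands for the empty string ε.)
    recursive = [body[1:] for body in bodies if body and body[0] == head]   # the alphas
    others    = [body for body in bodies if not body or body[0] != head]    # the betas
    if not recursive:
        return {head: bodies}
    tail = head + "_tail"
    return {
        head: [beta + [tail] for beta in others],
        tail: [alpha + [tail] for alpha in recursive] + [["eps"]],
    }

# expr -> expr add_op term | term   becomes
# expr -> term expr_tail ;  expr_tail -> add_op term expr_tail | eps
print(remove_left_recursion("expr", [["expr", "add_op", "term"], ["term"]]))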

Another Predict Set Example

  cond → if expr then statement                    [1]
  cond → if expr then statement else statement     [2]

PREDICT([1]) = {if}
PREDICT([2]) = {if}
If the next token is an if, which production is the right one?

Common Prefix Problem
Non-disjoint predict sets: in order to predict which production will be applied, all predict sets for a given non-terminal need to be disjoint! If there exist two productions A → α and A → β such that there is a terminal x for which x ∈ PREDICT(A → α) ∩ PREDICT(A → β), then an LL(1) parser cannot properly predict which production must be chosen. This, too, can be addressed with left-factoring.

Dangling Else
Even if left recursion and common prefixes have been removed, a language may not be LL(1). In many languages the else branch of an if-then-else statement is optional, which makes the grammar ambiguous: which if should the else be matched to?

  if AAA then if BBB then CCC else DDD

This can be handled with a tricky LR grammar, but there exists no LL(1) parser that can parse such statements. Even though a proper LR(1) parser can handle it, it may not handle it in the way the programmer desires. Good language design avoids such constructs.

To write this code correctly (matching the intent expressed by the indentation), begin and end statements must be added; the result is LL-compatible:

  if AAA then if BBB then CCC else DDD

becomes

  if AAA then begin if BBB then CCC end else DDD

A grammar that avoids the dangling else problem:

  statement → cond
  cond → if expr then block_statement
  cond → if expr then block_statement else block_statement
  block_statement → begin statement* end
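
A recursive-descent sketch for this grammar (an illustration under assumptions, not the slides' code): the shared prefix of the two cond productions is handled by checking for an optional else after the then-branch, and the condition is stubbed out as a single "expr" token so the sketch stays self-contained.

class Parser:
    def __init__(self, tokens):
        self.tokens = list(tokens)
        self.pos = 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "<eof>"

    def match(self, expected):
        if self.peek() != expected:
            raise SyntaxError(f"expected {expected!r}, got {self.peek()!r}")
        self.pos += 1

    def parse_statement(self):
        # statement -> cond   (other statement forms are omitted in this sketch)
        self.parse_cond()

    def parse_cond(self):
        # cond -> if expr then block_statement [ else block_statement ]
        self.match("if")
        self.match("expr")          # placeholder for a real expression parser
        self.match("then")
        self.parse_block_statement()
        if self.peek() == "else":   # an else always attaches to this if
            self.match("else")
            self.parse_block_statement()

    def parse_block_statement(self):
        # block_statement -> begin statement* end
        self.match("begin")
        while self.peek() != "end":
            self.parse_statement()
        self.match("end")

# A token stream shaped like:  if <expr> then begin ... end else begin ... end
tokens = ["if", "expr", "then", "begin",
          "if", "expr", "then", "begin", "end",
          "end", "else", "begin", "end"]
Parser(tokens).parse_statement()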