Chapter 3. Parsing #1


Parser

[Diagram: source file -> (get next character) -> scanner -> (get token) -> parser -> AST]

A parser recognizes sequences of tokens according to some grammar and generates Abstract Syntax Trees (ASTs).

A context-free grammar (CFG) has:
- a finite set of terminals (tokens)
- a finite set of nonterminals, one of which is the start symbol
- a finite set of productions of the form A ::= X1 X2 ... Xn, where A is a nonterminal and each Xi is either a terminal or a nonterminal symbol

Example

Expressions:
  E ::= E + T | E - T | T
  T ::= T * F | T / F | F
  F ::= num | id

Nonterminals: E T F
Start symbol: E
Terminals: + - * / id num
Example: x+2*y

... or equivalently:
  E ::= E + T
  E ::= E - T
  E ::= T
  T ::= T * F
  T ::= T / F
  T ::= F
  F ::= num
  F ::= id

Derivations

Notation:
- terminals: t, s, ...
- nonterminals: A, B, ...
- symbol (terminal or nonterminal): X, Y, ...
- sequence of symbols: a, b, ...

Given a production A ::= X1 X2 ... Xn, the step a A b => a X1 X2 ... Xn b is called a derivation. E.g., using the production T ::= T * F we get T / F + 1 - x => T * F / F + 1 - x.

Leftmost derivation: when you always expand the leftmost nonterminal in the sequence.
Rightmost derivation: ... the rightmost nonterminal.

Scanning vs. Parsing

Regular expressions are used to classify: identifiers, numbers, keywords.
- REs are more concise and simpler for tokens than a grammar
- more efficient scanners can be built from REs (DFAs) than from grammars

Context-free grammars are used to count: brackets (), begin...end, if..then..else
- imparting structure: expressions

Syntactic analysis is complicated enough: the grammar for C has around 200 productions. Factoring out lexical analysis as a separate phase makes the compiler more manageable.

Top-down Parsing

It starts from the start symbol of the grammar and applies derivations until the entire input string is derived.

Example that matches the input sequence id(x) + num(2) * id(y):
  E => E + T          use E ::= E + T
    => E + T * F      use T ::= T * F
    => T + T * F      use E ::= T
    => T + F * F      use T ::= F
    => T + num * F    use F ::= num
    => F + num * F    use T ::= F
    => id + num * F   use F ::= id
    => id + num * id  use F ::= id

You may have more than one choice at each derivation step:
- there may be multiple nonterminals in the sequence
- for each nonterminal in the sequence, there may be many rules to choose from

Wrong predictions will cause backtracking; we need predictive parsing that never backtracks.

Bottom-up Parsing

It starts from the input string and applies derivations in the opposite direction (reducing the right-hand side of a production to the nonterminal on its left-hand side) until you derive the start symbol.

Previous example: id(x) + num(2) * id(y)
  <= id(x) + num(2) * F   use F ::= id
  <= id(x) + F * F        use F ::= num
  <= id(x) + T * F        use T ::= F
  <= id(x) + T            use T ::= T * F
  <= F + T                use F ::= id
  <= T + T                use T ::= F
  <= E + T                use E ::= T
  <= E                    use E ::= E + T

At each step, we need to recognize a handle (the sequence of symbols that matches the right-hand side of a production).

Parse Tree

Given the derivations used in the top-down/bottom-up parsing of an input sequence, a parse tree has:
- the start symbol as the root
- the terminals of the input sequence as leaves
- for each production A ::= X1 X2 ... Xn used in a derivation, a node A with children X1 X2 ... Xn

[Parse tree for id(x) + num(2) * id(y): the root E has children E + T; the left E derives T, then F, then id(x); the right T has children T * F, where that T derives F and then num(2), and the F derives id(y)]

  E => E + T => E + T * F => T + T * F => T + F * F => T + num * F => F + num * F => id + num * F => id + num * id

Playing with Associativity

What about this grammar?
  E ::= T + E | T - E | T
  T ::= F * T | F / T | F
  F ::= num | id

[Parse tree for id(x) + id(y) + id(z): E derives T + E, where T derives id(x) via F, and the nested E again derives T + E, grouping id(y) and id(z) together]

Right associative: now x+y+z is equivalent to x+(y+z).

Ambiguous Grammars

What about this grammar?
  E ::= E + E | E - E | E * E | E / E | num | id

[Two parse trees for id(x) * id(y) + id(z): one groups it as (x*y)+z, the other as x*(y+z)]

Operators + - * / have the same precedence! The grammar is ambiguous: it has more than one parse tree for the same input sequence (depending on which derivations are applied each time).

Predictive Parsing

The goal is to construct a top-down parser that never backtracks. It always uses leftmost derivations, so left recursion is bad!

We must transform a grammar in two ways:
- eliminate left recursion
- perform left factoring

These rules eliminate the most common causes of backtracking, although they do not guarantee completely backtrack-free parsing.

Left Recursion Elimination

For example, the grammar A ::= A a | b recognizes the regular expression ba*. But a top-down parser may have a hard time deciding which rule to use.

We need to get rid of the left recursion:
  A ::= b A'
  A' ::= a A' | epsilon

That is, A' parses the RE a*. The second rule is recursive, but not left recursive.
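To see why the transformed grammar is easy to parse top-down, here is a minimal recursive descent sketch for it; the scanner interface (current_token, read_next_token, error) and the token codes TOKEN_a and TOKEN_b are assumed names for illustration, not from the slides:

  /* A ::= b A'        A' ::= a A' | epsilon */
  static void Aprime (void) {
    if (current_token == TOKEN_a) {   /* A' ::= a A' */
      read_next_token();
      Aprime();
    }
    /* otherwise A' ::= epsilon: consume nothing and return */
  }

  static void A (void) {
    if (current_token == TOKEN_b) {   /* A ::= b A' */
      read_next_token();
      Aprime();
    } else error();                   /* the input must start with b */
  }

With the original left-recursive rule A ::= A a, the procedure for A would have to call itself before consuming any input, and so would never terminate.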

Left Recursion Elimination (cont.)

For each nonterminal X, we partition the productions for X into two groups: one that contains the left recursive productions, and the other with the rest. That is:
  X ::= X a1   ...   X ::= X an     (the left recursive productions)
  X ::= b1     ...   X ::= bm       (the rest)
where the a's and b's are symbol sequences. Then we eliminate the left recursion by rewriting these rules into:
  X ::= b1 X'    ...   X ::= bm X'
  X' ::= a1 X'   ...   X' ::= an X'
  X' ::=

Example

  E ::= E + T | E - T | T
  T ::= T * F | T / F | F
  F ::= num | id

After left recursion elimination:
  E ::= T E'
  E' ::= + T E' | - T E' | epsilon
  T ::= F T'
  T' ::= * F T' | / F T' | epsilon
  F ::= num | id

Example

A grammar that recognizes regular expressions:
  R ::= R R | R bar R | R * | ( R ) | char

After left recursion elimination:
  R ::= ( R ) R' | char R'
  R' ::= R R' | bar R R' | * R' | epsilon

Left Factoring

Factors out common prefixes:
  X ::= a b1   ...   X ::= a bn
becomes:
  X ::= a X'
  X' ::= b1   ...   X' ::= bn

Example:
  E ::= T + E | T - E | T
becomes:
  E ::= T E'
  E' ::= + E | - E | epsilon

Recursive Descent Parsing

  E ::= T E'
  E' ::= + T E' | - T E' | epsilon
  T ::= F T'
  T' ::= * F T' | / F T' | epsilon
  F ::= num | id

static void E () { T(); Eprime(); }

static void Eprime () {
  if (current_token == PLUS)       { read_next_token(); T(); Eprime(); }
  else if (current_token == MINUS) { read_next_token(); T(); Eprime(); }
}

static void T () { F(); Tprime(); }

static void Tprime () {
  if (current_token == TIMES)      { read_next_token(); F(); Tprime(); }
  else if (current_token == DIV)   { read_next_token(); F(); Tprime(); }
}

static void F () {
  if (current_token == NUM || current_token == ID) read_next_token();
  else error();
}
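To show how these procedures might be driven, here is a small, self-contained sketch; the token codes, the hard-coded token array standing in for the scanner, and the EOS end marker are assumptions for illustration only:

  #include <stdio.h>
  #include <stdlib.h>

  /* Assumed token codes; a real scanner would produce these. */
  enum { PLUS, MINUS, TIMES, DIV, NUM, ID, EOS };

  /* Hard-coded token stream standing in for the scanner: "x + 2 * y". */
  static int tokens[] = { ID, PLUS, NUM, TIMES, ID, EOS };
  static int pos = 0, current_token;
  static void read_next_token (void) { current_token = tokens[pos++]; }
  static void error (void) { printf("syntax error\n"); exit(1); }

  static void Eprime (void);  static void Tprime (void);
  static void T (void);       static void F (void);

  static void E (void)      { T(); Eprime(); }
  static void Eprime (void) {
    if (current_token == PLUS)       { read_next_token(); T(); Eprime(); }
    else if (current_token == MINUS) { read_next_token(); T(); Eprime(); }
  }
  static void T (void)      { F(); Tprime(); }
  static void Tprime (void) {
    if (current_token == TIMES)      { read_next_token(); F(); Tprime(); }
    else if (current_token == DIV)   { read_next_token(); F(); Tprime(); }
  }
  static void F (void) {
    if (current_token == NUM || current_token == ID) read_next_token();
    else error();
  }

  int main (void) {
    read_next_token();             /* prime the one-token lookahead */
    E();                           /* parse a whole expression */
    if (current_token != EOS) error();
    printf("accepted\n");
    return 0;
  }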

Building the tree

One of the key jobs of the parser is to build an intermediate representation of the source code. To build an abstract syntax tree, we can simply insert tree-building actions at the appropriate points:

static void F () {
  if (current_token == NUM || current_token == ID) {
    Push current_token;        /* push a leaf node for the matched token */
    read_next_token();
  } else error();
}

static void Tprime () {
  if (current_token == TIMES) {
    read_next_token(); F(); Tprime();
    Pop Tprime-tree; Pop F-tree;
    Join them in a new Tprime-tree; Push Tprime-tree;
  } else if (current_token == DIV) {
    ...   /* similarly for / */
  }
}
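The Push, Pop, and Join actions above are pseudocode. One possible (assumed) realization keeps an explicit stack of AST nodes; the Ast struct and the helper names below are illustrative, not from the slides:

  #include <stdlib.h>

  /* A binary AST node: leaves carry a token code, interior nodes an operator. */
  typedef struct Ast {
    int tag;                       /* e.g. ID, NUM, TIMES, DIV */
    struct Ast *left, *right;      /* NULL for leaves */
  } Ast;

  static Ast *node_stack[100];
  static int  node_top = 0;

  static void push_node (Ast *n) { node_stack[node_top++] = n; }
  static Ast *pop_node  (void)   { return node_stack[--node_top]; }

  /* "Join them in a new Tprime-tree": build an operator node from two subtrees. */
  static Ast *join (int op, Ast *left, Ast *right) {
    Ast *n = (Ast *) malloc(sizeof(Ast));
    n->tag = op; n->left = left; n->right = right;
    return n;
  }

  /* "Push current_token": push a leaf node for the token just matched in F(). */
  static void push_leaf (int token) { push_node(join(token, NULL, NULL)); }

When parsing finishes, the single node left on the stack is the AST of the whole expression.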

Non-recursive predictive parsing

Observation: our recursive descent parser encodes state information in its runtime stack, or call stack. Using recursive procedure calls to implement a stack abstraction may not be particularly efficient.

This suggests another implementation method: a stack-based, table-driven parser.

Non-recursive predictive parsing Rather than writing code, we build the table (automatically)

Table-driven parsers. This is true for both top-down (LL) and bottom-up (LR) parsers.

Predictive Parsing Using a Table

The symbol sequence from a derivation is stored in a stack (first symbol on top).
- if the top of the stack is a terminal, it should match the current token from the input
- if the top of the stack is a nonterminal X and the current input token is t, we look up a rule in the parse table, M[X,t]; the rule is used as a derivation to replace X in the stack with the right-hand-side symbols

  push(S);   /* the start symbol */
  read_next_token();
  repeat
    X = pop();
    if (X is a terminal or '$')
      if (X == current_token) read_next_token(); else error();
    else if (M[X,current_token] == "X ::= Y1 Y2 ... Yk")
      { push(Yk); ... push(Y1); }
    else error();
  until X == '$';

Parsing Table Example

  1) E ::= T E' $
  2) E' ::= + T E'
  3) E' ::= - T E'
  4) E' ::= (empty)
  5) T ::= F T'
  6) T' ::= * F T'
  7) T' ::= / F T'
  8) T' ::= (empty)
  9) F ::= num
 10) F ::= id

        num  id   +   -   *   /   $
  E      1    1
  E'               2   3           4
  T      5    5
  T'               8   8   6   7   8
  F      9   10

Example: Parsing x-2*y$

  Stack (top at right)   current_token   Rule
  E                      x               M[E,id] = 1    (using E ::= T E' $)
  $ E' T                 x               M[T,id] = 5    (using T ::= F T')
  $ E' T' F              x               M[F,id] = 10   (using F ::= id)
  $ E' T' id             x               read_next_token
  $ E' T'                -               M[T',-] = 8    (using T' ::= )
  $ E'                   -               M[E',-] = 3    (using E' ::= - T E')
  $ E' T -               -               read_next_token
  $ E' T                 2               M[T,num] = 5   (using T ::= F T')
  $ E' T' F              2               M[F,num] = 9   (using F ::= num)
  $ E' T' num            2               read_next_token
  $ E' T'                *               M[T',*] = 6    (using T' ::= * F T')
  $ E' T' F *            *               read_next_token
  $ E' T' F              y               M[F,id] = 10   (using F ::= id)
  $ E' T' id             y               read_next_token
  $ E' T'                $               M[T',$] = 8    (using T' ::= )
  $ E'                   $               M[E',$] = 4    (using E' ::= )
  $                      $               stop (accept)
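As a concrete, self-contained illustration of the pseudocode, table, and trace above, here is one possible C encoding (an assumed sketch, not part of the original slides): symbols are small integers, M maps a (nonterminal, lookahead) pair to a production number, and the token stream for x-2*y$ is hard-coded in place of a scanner.

  #include <stdio.h>
  #include <stdlib.h>

  /* Terminal codes double as column indices of the parse table;
     nonterminal codes follow them.  All names here are assumed. */
  enum { NUM, ID, PLUS, MINUS, TIMES, DIV, DOLLAR,      /* terminals 0..6 */
         NT_E, NT_Ep, NT_T, NT_Tp, NT_F };              /* nonterminals 7..11 */

  #define IS_TERMINAL(x) ((x) <= DOLLAR)

  /* Right-hand sides of productions 1..10, each terminated by -1. */
  static const int rhs[11][4] = {
    { -1 },                            /* 0: unused */
    { NT_T, NT_Ep, DOLLAR, -1 },       /* 1: E  ::= T E' $  */
    { PLUS, NT_T, NT_Ep, -1 },         /* 2: E' ::= + T E'  */
    { MINUS, NT_T, NT_Ep, -1 },        /* 3: E' ::= - T E'  */
    { -1 },                            /* 4: E' ::= (empty) */
    { NT_F, NT_Tp, -1 },               /* 5: T  ::= F T'    */
    { TIMES, NT_F, NT_Tp, -1 },        /* 6: T' ::= * F T'  */
    { DIV, NT_F, NT_Tp, -1 },          /* 7: T' ::= / F T'  */
    { -1 },                            /* 8: T' ::= (empty) */
    { NUM, -1 },                       /* 9: F  ::= num     */
    { ID, -1 }                         /* 10: F ::= id      */
  };

  /* The parse table from above: M[nonterminal][lookahead] = production, 0 = error. */
  static const int M[5][7] = {
    /*          num id  +  -  *  /  $ */
    /* E  */  {  1,  1, 0, 0, 0, 0, 0 },
    /* E' */  {  0,  0, 2, 3, 0, 0, 4 },
    /* T  */  {  5,  5, 0, 0, 0, 0, 0 },
    /* T' */  {  0,  0, 8, 8, 6, 7, 8 },
    /* F  */  {  9, 10, 0, 0, 0, 0, 0 }
  };

  /* Token stream for x - 2 * y $, standing in for a scanner. */
  static const int input[] = { ID, MINUS, NUM, TIMES, ID, DOLLAR };
  static int pos = 0, current_token;
  static void read_next_token (void) { current_token = input[pos++]; }
  static void error (void) { printf("syntax error\n"); exit(1); }

  static int stack[100], sp = 0;
  static void push (int x) { stack[sp++] = x; }
  static int  pop  (void)  { return stack[--sp]; }

  int main (void) {
    int X, rule, i, len;
    push(NT_E);                        /* the start symbol */
    read_next_token();
    do {
      X = pop();
      if (IS_TERMINAL(X)) {
        if (X != current_token) error();
        else if (X != DOLLAR) read_next_token();        /* '$' is the last token */
      } else {
        rule = M[X - NT_E][current_token];
        if (rule == 0) error();
        for (len = 0; rhs[rule][len] != -1; len++) ;    /* length of the RHS */
        for (i = len - 1; i >= 0; i--) push(rhs[rule][i]);  /* push RHS in reverse */
      }
    } while (X != DOLLAR);
    printf("accepted\n");
    return 0;
  }

Pushing the right-hand side in reverse keeps its leftmost symbol on top of the stack, which is exactly the stack discipline shown in the trace above.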

Constructing the Parsing Table

FIRST[a] is the set of terminals t that can appear first after a number of derivations on the symbol sequence a, i.e., a => ... => t b for some symbol sequence b.
- FIRST[t a] = {t}, e.g., FIRST[3+E] = {3}
- FIRST[X] = FIRST[a1] U ... U FIRST[an] for the productions X ::= a1, ..., X ::= an
- FIRST[X a] = FIRST[X], but if X has an empty derivation then FIRST[X a] = FIRST[X] U FIRST[a]

FOLLOW[X] is the set of all terminals that can follow X in any legal derivation.
- Find all productions Z ::= a X b in which X appears on the RHS; then FIRST[b] must be included in FOLLOW[X].
- If b has an empty derivation, FOLLOW[Z] must be included in FOLLOW[X].

Example

  1) E ::= T E' $
  2) E' ::= + T E'
  3) E' ::= - T E'
  4) E' ::= (empty)
  5) T ::= F T'
  6) T' ::= * F T'
  7) T' ::= / F T'
  8) T' ::= (empty)
  9) F ::= num
 10) F ::= id

       FIRST      FOLLOW
  E    {num,id}   {}
  E'   {+,-}      {$}
  T    {num,id}   {+,-,$}
  T'   {*,/}      {+,-,$}
  F    {num,id}   {+,-,*,/,$}
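For instance, two of the FOLLOW sets are obtained as follows (a worked application of the rules on the previous slide): for FOLLOW[T], T is followed by E' $ in production 1 and by E' in productions 2 and 3; FIRST[E'] = {+,-}, and since E' has an empty derivation, $ and FOLLOW[E'] = {$} are also included, giving {+,-,$}. For FOLLOW[F], F is followed by T' in productions 5, 6, and 7; FIRST[T'] = {*,/}, and since T' has an empty derivation, FOLLOW[T] and FOLLOW[T'] (both {+,-,$}) are also included, giving {+,-,*,/,$}.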

Constructing the Parsing Table (cont.)

For each rule X ::= a do:
- for each t in FIRST[a], add X ::= a to M[X,t]
- if a can be reduced to the empty sequence, then for each t in FOLLOW[X], add X ::= a to M[X,t]

For the grammar and the FIRST/FOLLOW sets above, this gives:

  1) E ::= T E' $
  2) E' ::= + T E'
  3) E' ::= - T E'
  4) E' ::= (empty)
  5) T ::= F T'
  6) T' ::= * F T'
  7) T' ::= / F T'
  8) T' ::= (empty)
  9) F ::= num
 10) F ::= id

       FIRST      FOLLOW
  E    {num,id}   {}
  E'   {+,-}      {$}
  T    {num,id}   {+,-,$}
  T'   {*,/}      {+,-,$}
  F    {num,id}   {+,-,*,/,$}

        num  id   +   -   *   /   $
  E      1    1
  E'               2   3           4
  T      5    5
  T'               8   8   6   7   8
  F      9   10

Another Example

  G ::= S $
  S ::= ( L ) | a
  L ::= L , S | S

After eliminating the left recursion in L:
  0) G ::= S $
  1) S ::= ( L )
  2) S ::= a
  3) L ::= S L'
  4) L' ::= , S L'
  5) L' ::= (empty)

        (    )    a    ,    $
  G     0         0
  S     1         2
  L     3         3
  L'         5         4

LL(1)

A grammar is called LL(1) if each entry of its parsing table contains at most one production.
- the first L in LL(1) means that we read the input from left to right
- the second L means that it uses leftmost derivations only
- the number 1 means that we need to look one token ahead in the input

Another definition: LL(1)

A grammar is LL(1) if and only if, for each set of productions A ::= alpha1 | alpha2 | ... | alphan:
- FIRST(alpha1), FIRST(alpha2), ..., FIRST(alphan) are pairwise disjoint
- if epsilon can be derived from some alphai, then FIRST(alphaj) (for all j != i) has no intersection with FOLLOW(A)
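As a quick check of this condition against the transformed expression grammar above: for E' ::= + T E' | - T E' | epsilon, FIRST(+ T E') = {+} and FIRST(- T E') = {-} are pairwise disjoint, and since epsilon is derivable from the third alternative, FOLLOW(E') = {$} must not intersect {+} or {-}, which it does not; so E' satisfies the condition. The original left-recursive grammar fails it: for E ::= E + T | E - T | T, all three alternatives have FIRST = {num, id}.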

Provable facts about LL(1) grammars

- No left-recursive grammar is LL(1).
- No ambiguous grammar is LL(1).
- Some languages have no LL(1) grammar.
- An epsilon-free grammar in which each alternative expansion for a nonterminal A begins with a distinct terminal is a simple LL(1) grammar.

Error recovery

A syntax error occurs when the string of input tokens is not a sentence in the language. Error recovery is a way of finding some sentence similar to that string of tokens. This can proceed by deleting, replacing, or inserting tokens.

For example, error recovery for T could proceed by inserting a num token. It is not necessary to adjust the actual input; it suffices to pretend that the num was there, print a message, and return normally. Then we can have an error message in the default case for T:

  default: print("expected id, num, or left-paren");
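A minimal sketch of what that default case might look like inside the parsing procedure for T, written in the style of the code on the next slide (tok, print) and assuming the fuller grammar in which F can also be a parenthesized expression; the token names here are assumptions:

  void T (void) {
    switch (tok) {
      case ID:
      case NUM:
      case LPAREN:                 /* FIRST(T) = { id, num, ( } */
        F(); Tprime();
        break;
      default:
        /* recovery by insertion: pretend the expected token was there,
           report the error, and return normally */
        print("expected id, num, or left-paren");
        break;
    }
  }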

Error recovery (cont.)

Recovery by deletion works by skipping tokens until a token in the FOLLOW set is reached:

int Tprime_follow[] = { PLUS, RPAREN, EOF };

void Tprime () {
  switch (tok) {
    case PLUS: break;
    case TIMES: eat(TIMES); F(); Tprime(); break;
    case RPAREN: break;
    case EOF: break;
    default:
      print("expected +, *, right-paren, or end-of-file");
      skipto(Tprime_follow);
  }
}

Chapter 3 Reading