Chapter 3 Parsing #1
[Diagram: source file → (get next character) → scanner → (get token) → parser → AST]
A parser recognizes sequences of tokens according to some grammar and generates Abstract Syntax Trees (ASTs).
A context-free grammar (CFG) has:
- a finite set of terminals (tokens)
- a finite set of nonterminals, one of which is the start symbol
- a finite set of productions of the form A ::= X1 X2 ... Xn, where A is a nonterminal and each Xi is either a terminal or a nonterminal symbol
Expressions:
E ::= E + T | E - T | T
T ::= T * F | T / F | F
F ::= num | id
Nonterminals: E T F    Start symbol: E    Terminals: + - * / id num
Example: x+2*y
...or equivalently:
E ::= E + T
E ::= E - T
E ::= T
T ::= T * F
T ::= T / F
T ::= F
F ::= num
F ::= id
Derivations
Notation:
terminals: t, s, ...
nonterminals: A, B, ...
symbol (terminal or nonterminal): X, Y, ...
sequence of symbols: a, b, ...
Given a production A ::= X1 X2 ... Xn, the step a A b => a X1 X2 ... Xn b is called a derivation.
e.g., using the production T ::= T * F we get
T / F + 1 - x => T * F / F + 1 - x
Leftmost derivation: always expand the leftmost nonterminal in the sequence.
Rightmost derivation: ... the rightmost nonterminal.
Scanning vs. Parsing
Regular expressions are used to classify: identifiers, numbers, keywords.
- REs are more concise and simpler for tokens than a grammar
- more efficient scanners can be built from REs (DFAs) than from grammars
Context-free grammars are used to count brackets: (), begin...end, if...then...else; and to impart structure: expressions.
Syntactic analysis is complicated enough: the grammar for C has around 200 productions. Factoring out lexical analysis as a separate phase makes the compiler more manageable.
Top-down Parsing
It starts from the start symbol of the grammar and applies derivations until the entire input string is derived.
Example that matches the input sequence id(x) + num(2) * id(y):
E => E + T          use E ::= E + T
  => E + T * F      use T ::= T * F
  => T + T * F      use E ::= T
  => T + F * F      use T ::= F
  => T + num * F    use F ::= num
  => F + num * F    use T ::= F
  => id + num * F   use F ::= id
  => id + num * id  use F ::= id
You may have more than one choice at each derivation step:
- may have multiple nonterminals in each sequence
- for each nonterminal in the sequence, may have many rules to choose from
Wrong predictions will cause backtracking; we need predictive parsing that never backtracks.
Bottom-up Parsing
It starts from the input string and uses derivations in the opposite direction (from right to left) until you derive the start symbol.
Previous example: id(x) + num(2) * id(y)
<= id(x) + num(2) * F   use F ::= id
<= id(x) + F * F        use F ::= num
<= id(x) + T * F        use T ::= F
<= id(x) + T            use T ::= T * F
<= F + T                use F ::= id
<= T + T                use T ::= F
<= E + T                use E ::= T
<= E                    use E ::= E + T
At each derivation step, we need to recognize a handle (the sequence of symbols that matches the right-hand side of a production).
Parse Tree
Given the derivations used in the top-down/bottom-up parsing of an input sequence, a parse tree has:
- the start symbol as the root
- the terminals of the input sequence as leaves
- for each production A ::= X1 X2 ... Xn used in a derivation, a node A with children X1 X2 ... Xn
[Figure: parse tree for id(x) + num(2) * id(y)]
E => E + T => E + T * F => T + T * F => T + F * F => T + num * F => F + num * F => id + num * F => id + num * id
Playing with Associativity
What about this grammar?
E ::= T + E | T - E | T
T ::= F * T | F / T | F
F ::= num | id
[Figure: right-leaning parse tree for id(x) + id(y) + id(z)]
Right associative: now x+y+z is equivalent to x+(y+z)
Ambiguous Grammars
What about this grammar?
E ::= E + E | E - E | E * E | E / E | num | id
[Figure: two different parse trees for id(x) * id(y) + id(z)]
Operators + - * / have the same precedence!
It is ambiguous: it has more than one parse tree for the same input sequence (depending on which derivations are applied each time).
Predictive Parsing
The goal is to construct a top-down parser that never backtracks. It always uses leftmost derivations, so left recursion is bad! We must transform a grammar in two ways:
- eliminate left recursion
- perform left factoring
These rules eliminate the most common causes of backtracking, although they do not guarantee completely backtrack-free parsing.
Left Recursion Elimination
For example, the grammar A ::= A a | b recognizes the regular expression ba*, but a top-down parser may have a hard time deciding which rule to use. We need to get rid of the left recursion:
A ::= b A'
A' ::= a A' | ε
i.e., A' parses the RE a*. The second rule is recursive, but not left recursive.
Left Recursion Elimination (cont.)
For each nonterminal X, we partition the productions for X into two groups: one that contains the left recursive productions, and the other with the rest. That is:
X ::= X a1 ... X ::= X an
X ::= b1 ... X ::= bm
where the ai and bi are symbol sequences and no bi starts with X. Then we eliminate the left recursion by rewriting these rules into:
X ::= b1 X' ... X ::= bm X'
X' ::= a1 X' ... X' ::= an X'
X' ::= ε
Example
E ::= E + T | E - T | T
T ::= T * F | T / F | F
F ::= num | id
becomes:
E ::= T E'
E' ::= + T E' | - T E' | ε
T ::= F T'
T' ::= * F T' | / F T' | ε
F ::= num | id
Example
A grammar that recognizes regular expressions:
R ::= R R | R bar R | R * | ( R ) | char
After left recursion elimination:
R ::= ( R ) R' | char R'
R' ::= R R' | bar R R' | * R' | ε
Left Factoring
Factors out common prefixes:
X ::= a b1 ... X ::= a bn
becomes:
X ::= a X'
X' ::= b1 ... X' ::= bn
Example:
E ::= T + E | T - E | T
becomes:
E ::= T E'
E' ::= + E | - E | ε
Recursive Descent Parsing
E ::= T E'
E' ::= + T E' | - T E' | ε
T ::= F T'
T' ::= * F T' | / F T' | ε
F ::= num | id

static void E () { T(); Eprime(); }
static void Eprime () {
  if (current_token == PLUS) { read_next_token(); T(); Eprime(); }
  else if (current_token == MINUS) { read_next_token(); T(); Eprime(); }
}
static void T () { F(); Tprime(); }
static void Tprime () {
  if (current_token == TIMES) { read_next_token(); F(); Tprime(); }
  else if (current_token == DIV) { read_next_token(); F(); Tprime(); }
}
static void F () {
  if (current_token == NUM || current_token == ID) read_next_token();
  else error();
}
Building the tree
One of the key jobs of the parser is to build an intermediate representation of the source code. To build an abstract syntax tree, we can simply insert code at the appropriate points:

static void F () {
  if (current_token == NUM || current_token == ID) {
    read_next_token();
    Push current_token;
  } else error();
}
static void Tprime () {
  if (current_token == TIMES) {
    read_next_token(); F(); Tprime();
    Pop Tprime-tree; Pop F-tree;
    Join them in a new Tprime-tree;
    Push Tprime-tree;
  } else if (current_token == DIV) ...
}
Non-recursive predictive parsing
Observation: our recursive descent parser encodes state information in its runtime stack, or call stack. Using recursive procedure calls to implement a stack abstraction may not be particularly efficient. This suggests an alternative implementation method: a stack-based, table-driven parser.
Non-recursive predictive parsing Rather than writing code, we build the table (automatically)
Table-driven parsers
This is true for both top-down (LL) and bottom-up (LR) parsers.
Predictive Parsing Using a Table
The symbol sequence from a derivation is stored in a stack (first symbol on top):
- if the top of the stack is a terminal, it should match the current token from the input
- if the top of the stack is a nonterminal X and the current input token is t, we get a rule from the parsing table M[X,t]; the rule is used as a derivation to replace X in the stack with the right-hand-side symbols

push(S);
read_next_token();
repeat
  X = pop();
  if (X is a terminal or '$')
    if (X == current_token) read_next_token();
    else error();
  else if (M[X,current_token] == "X ::= Y1 Y2 ... Yk")
    { push(Yk); ... push(Y1); }
  else error();
until X == '$';
Parsing Table Example
1) E ::= T E' $
2) E' ::= + T E'
3) E' ::= - T E'
4) E' ::= ε
5) T ::= F T'
6) T' ::= * F T'
7) T' ::= / F T'
8) T' ::= ε
9) F ::= num
10) F ::= id

     num  id   +   -   *   /   $
E     1    1
E'              2   3           4
T     5    5
T'              8   8   6   7   8
F     9   10
Example: Parsing x-2*y$
Stack (top at right)   current_token   Rule
E                      x               M[E,id] = 1 (use E ::= T E' $)
$ E' T                 x               M[T,id] = 5 (use T ::= F T')
$ E' T' F              x               M[F,id] = 10 (use F ::= id)
$ E' T' id             x               read_next_token
$ E' T'                -               M[T',-] = 8 (use T' ::= ε)
$ E'                   -               M[E',-] = 3 (use E' ::= - T E')
$ E' T -               -               read_next_token
$ E' T                 2               M[T,num] = 5 (use T ::= F T')
$ E' T' F              2               M[F,num] = 9 (use F ::= num)
$ E' T' num            2               read_next_token
$ E' T'                *               M[T',*] = 6 (use T' ::= * F T')
$ E' T' F *            *               read_next_token
$ E' T' F              y               M[F,id] = 10 (use F ::= id)
$ E' T' id             y               read_next_token
$ E' T'                $               M[T',$] = 8 (use T' ::= ε)
$ E'                   $               M[E',$] = 4 (use E' ::= ε)
$                      $               stop (accept)
Constructing the Parsing Table
FIRST[a] is the set of terminals t that can result after a number of derivations on the symbol sequence a, i.e., a => ... => t b for some symbol sequence b.
- FIRST[t a] = {t}; e.g., FIRST[3+E] = {3}
- FIRST[X] = FIRST[a1] ∪ ... ∪ FIRST[an] for the productions X ::= a1, ..., X ::= an
- FIRST[X a] = FIRST[X], but if X has an empty derivation then FIRST[X a] = FIRST[X] ∪ FIRST[a]
FOLLOW[X] is the set of all terminals that follow X in any legal derivation:
- find all productions Z ::= a X b in which X appears on the RHS; then FIRST[b] must be included in FOLLOW[X]
- if b has an empty derivation, FOLLOW[Z] must be included in FOLLOW[X]
Example
1) E ::= T E' $
2) E' ::= + T E'
3) E' ::= - T E'
4) E' ::= ε
5) T ::= F T'
6) T' ::= * F T'
7) T' ::= / F T'
8) T' ::= ε
9) F ::= num
10) F ::= id

     FIRST      FOLLOW
E    {num,id}   {}
E'   {+,-}      {$}
T    {num,id}   {+,-,$}
T'   {*,/}      {+,-,$}
F    {num,id}   {+,-,*,/,$}
Constructing the Parsing Table (cont.)
For each rule X ::= a do:
- for each t in FIRST[a], add X ::= a to M[X,t]
- if a can be reduced to the empty sequence, then for each t in FOLLOW[X], add X ::= a to M[X,t]

1) E ::= T E' $
2) E' ::= + T E'
3) E' ::= - T E'
4) E' ::= ε
5) T ::= F T'
6) T' ::= * F T'
7) T' ::= / F T'
8) T' ::= ε
9) F ::= num
10) F ::= id

     FIRST      FOLLOW
E    {num,id}   {}
E'   {+,-}      {$}
T    {num,id}   {+,-,$}
T'   {*,/}      {+,-,$}
F    {num,id}   {+,-,*,/,$}

     num  id   +   -   *   /   $
E     1    1
E'              2   3           4
T     5    5
T'              8   8   6   7   8
F     9   10
Another Example
G ::= S $
S ::= ( L ) | a
L ::= L , S | S
After left recursion elimination:
0) G ::= S $
1) S ::= ( L )
2) S ::= a
3) L ::= S L'
4) L' ::= , S L'
5) L' ::= ε

      (   )   a   ,   $
G     0       0
S     1       2
L     3       3
L'        5       4
LL(1)
A grammar is called LL(1) if each element of the parsing table of the grammar has at most one production:
- the first L in LL(1) means that we read the input from left to right
- the second L means that it uses leftmost derivations only
- the number 1 means that we need to look one token ahead in the input
Another definition: LL(1)
A grammar is LL(1) if and only if, for each set of productions A ::= a1 | a2 | ... | an:
- FIRST(a1), FIRST(a2), ..., FIRST(an) are pairwise disjoint
- if ε can be derived from some ai, then FIRST(aj) (for all j ≠ i) has no intersection with FOLLOW(A)
Provable facts about LL(1) grammars
- No left-recursive grammar is LL(1)
- No ambiguous grammar is LL(1)
- Some languages have no LL(1) grammar
- An ε-free grammar where each alternative expansion for A begins with a distinct terminal is a simple LL(1) grammar
Error recovery
A syntax error occurs when the string of input tokens is not a sentence in the language. Error recovery is a way of finding some sentence similar to that string of tokens. This can proceed by deleting, replacing, or inserting tokens. For example, error recovery for T could proceed by inserting a num token. It is not necessary to adjust the actual input; it suffices to pretend that the num was there, print a message, and return normally. Then we can have an error message in the default case for T:
default: print("expected id, num, or left-paren");
Error recovery (cont.)
Recovery by deletion works by skipping tokens until a token in the FOLLOW set is reached.

int Tprime_follow[] = {PLUS, RPAREN, EOF};
void Tprime() {
  switch (tok) {
  case PLUS: break;
  case TIMES: eat(TIMES); F(); Tprime(); break;
  case RPAREN: break;
  case EOF: break;
  default:
    print("expected +, *, right-paren, or end-of-file");
    skipto(Tprime_follow);
  }
}
Chapter 3 Reading