Syntax Analysis: Introduction to parsers · Context-free grammars · Push-down automata · Top-down parsing · LL grammars and parsers · Bottom-up parsing · LR grammars and parsers · Bison/Yacc parser generators · Error handling: detection & recovery
Introduction to parsers: source code → Lexical Analyzer → (token / next token) → Parser → syntax tree → Semantic Analyzer, with all phases consulting the Symbol Table
Context-Free Grammar: CFG & terminology · Rewrite vs. reduce · Derivation · Language and CFL · Equivalence & CNF · Parsing vs. derivation · lm/rm derivation & parse tree · Ambiguity & resolution · Expressive power. Derivation is the reverse of parsing: if we know how sentences are derived, we may find a parsing method in the reversed direction.
CFG: An Example. Terminals: id, +, -, *, /, (, ). Nonterminals: expr, op. Productions: expr → expr op expr | ( expr ) | - expr | id ; op → + | - | * | /. The start symbol: expr
Notational Conventions in CFG: a, b, c, …, [+-0-9], id denote symbols in Σ (terminals); A, B, C, …, S, expr, stmt denote symbols in N (nonterminals); U, V, W, …, X, Y, Z denote grammar symbols in (Σ ∪ N); α, β, γ, … denote strings in (Σ ∪ N)*; u, v, w, … denote strings in Σ*. A → α|β is an abbreviation for the alternatives A → α and A → β sharing the same LHS
Context-Free Grammars A set of terminals: basic symbols from which sentences are formed A set of nonterminals: syntactic variables denoting sets of strings A set of productions: rules specifying how the terminals and nonterminals can be combined to form sentences The start symbol: a distinguished nonterminal denoting the language 7
CFG: Components. Specification for structures & constituency. A CFG is a formal specification of structure (parse trees): G = (Σ, N, P, S). Σ: terminal symbols; N: non-terminal symbols; P: production rules; S: start symbol
CFG: Components. Σ: terminal symbols, the input symbols of the language. Programming languages: tokens (reserved words, variables, operators, …); natural languages: words or parts of speech; pre-terminals: parts of speech (when words are regarded as terminals). N: non-terminal symbols: groups of terminals and/or other non-terminals. S: start symbol: the largest constituent of a parse tree
CFG: Components. P: production (re-writing) rules. Form: A → β (A: non-terminal, β: string of terminals and non-terminals). Meaning: A rewrites to (consists of, derives) β, or β reduces to A. Derivations start with S-productions (S → β)
Derivations. A derivation step is an application of a production as a rewriting rule, e.g., E ⇒ -E. A sequence of derivation steps E ⇒ -E ⇒ -(E) ⇒ -(id) is called a derivation of -(id) from E. The symbol ⇒* denotes "derives in zero or more steps"; the symbol ⇒+ denotes "derives in one or more steps"
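The rewriting view of a derivation step can be sketched directly in Python. This is an illustrative sketch (the function name and list-of-symbols representation are my own, not from the slides); since E is the only nonterminal here, finding the leftmost E is the same as finding the leftmost nonterminal:

```python
def rewrite_leftmost(sentential, production):
    """Apply production (lhs, rhs) at the leftmost occurrence of lhs."""
    lhs, rhs = production
    i = sentential.index(lhs)          # leftmost occurrence of the nonterminal
    return sentential[:i] + rhs + sentential[i + 1:]

# Derive -(id) from E:  E => -E => -(E) => -(id)
steps = [("E", ["-", "E"]), ("E", ["(", "E", ")"]), ("E", ["id"])]
form = ["E"]
derivation = [form]
for p in steps:
    form = rewrite_leftmost(form, p)
    derivation.append(form)
# derivation now records every sentential form of the derivation
```

Each intermediate list is a sentential form; the last one contains only terminals, so it is a sentence.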
CFG: Accepted Languages. Context-Free Language: the language accepted by a CFG, L(G) = { w ∈ Σ* | S ⇒+ w } (strings of terminals that can be derived from the start symbol). Proof of acceptance: by induction on the number of derivation steps or on the length of the input string
Context-Free Languages. A context-free language L(G) is the language defined by a context-free grammar G. A string of terminals w is in L(G) if and only if S ⇒+ w; w is called a sentence of G. If S ⇒* α, where α may contain nonterminals, then α is called a sentential form of G (e.g., each step of E ⇒ -E ⇒ -(E) ⇒ -(id)). G1 is equivalent to G2 if L(G1) = L(G2)
CFG: Equivalence. Chomsky Normal Form (CNF) (Chomsky, 1963): ε-free, and every production rule has one of the forms A → A1 A2 or A → a (A1, A2: non-terminals, a: terminal), i.e., two non-terminals or one terminal at the RHS. Properties: generates binary parse trees; a good simplification for some algorithms, e.g., grammar training with the inside-outside algorithm (Baker 1979); a good tool for theoretical proofs, e.g., of time complexity
CFG: Equivalence. Every CFG can be converted into a weakly equivalent CNF grammar. Equivalence: L(G1) = L(G2). Strongly equivalent: additionally assigns the same phrase structure to each sentence (except for renaming non-terminals). Weakly equivalent: does not assign the same phrase structure to each sentence, e.g., A → B C D ≡ {A → B X, X → C D}
CFG: An Example. Terminals: id, +, -, *, /, (, ). Nonterminals: expr, op. Productions: expr → expr op expr [R1] | ( expr ) [R2] | - expr [R3] | id [R4] ; op → + | - | * | /. The start symbol: expr
Left- & Right-most Derivations. Each derivation step needs to choose a nonterminal to rewrite and an alternative to apply. A leftmost derivation always chooses the leftmost nonterminal to rewrite: E ⇒lm -E ⇒lm -(E) ⇒lm -(E + E) ⇒lm -(id + E) ⇒lm -(id + id). A rightmost (canonical) derivation always chooses the rightmost nonterminal to rewrite: E ⇒rm -E ⇒rm -(E) ⇒rm -(E + E) ⇒rm -(E + id) ⇒rm -(id + id)
Left- & Right-most Derivations. Representation of leftmost/rightmost derivations: use the sequence of productions (or production numbers) to represent a derivation sequence. Example: E ⇒rm -E ⇒rm -(E) ⇒rm -(E + E) ⇒rm -(E + id) ⇒rm -(id + id) => [3], [2], [1], [4], [4] (~ R3, R2, R1, R4, R4). Advantage: a compact representation of the parse tree (data compression). Each parse tree has a unique leftmost/rightmost derivation
Parse Trees. A parse tree is a graphical representation of a derivation that filters out the order of choosing nonterminals for rewriting. [Figure: tree for NP → NP PP over "girl in the park"]
Context Free Grammar (CFG): Specification for Structures & Constituency. Parse tree: graphical representation of structure. Root node (S): a sentential-level structure. Internal nodes: constituents of the sentence. Arcs: relationships between parent nodes and their children (constituents). Terminal nodes: surface forms of the input symbols (e.g., words). Bracketed notation: an alternative representation, e.g., [I saw [the [girl [in [the park]]]]]
Parse Tree: I saw the girl in the park. [Figure, 1st parse: S → NP VP; one attachment of the PP "in the park"; leaves: pron v det n p det n]
Parse Tree: I saw the girl in the park. [Figure, 2nd parse: S → NP VP with NP → NP PP, so the PP "in the park" modifies "the girl"; leaves: pron v det n p det n]
LM & RM: An Example. [Figure: parse tree for -(id + id) built from E → - E, E → ( E ), E → E + E, E → id]. Leftmost: E ⇒lm -E ⇒lm -(E) ⇒lm -(E + E) ⇒lm -(id + E) ⇒lm -(id + id). Rightmost: E ⇒rm -E ⇒rm -(E) ⇒rm -(E + E) ⇒rm -(E + id) ⇒rm -(id + id)
Parse Trees & Derivations Many derivations may correspond to the same parse tree, but every parse tree has associated with it a unique leftmost and a unique rightmost derivation 24
Ambiguous Grammar. A grammar is ambiguous if it produces more than one parse tree for some sentence, i.e., more than one leftmost/rightmost derivation: E ⇒ E + E ⇒ id + E ⇒ id + E * E ⇒ id + id * E ⇒ id + id * id versus E ⇒ E * E ⇒ E + E * E ⇒ id + E * E ⇒ id + id * E ⇒ id + id * id
Ambiguous Grammar. [Figure: the two parse trees for id + id * id — one rooted at E → E + E with E * E below, the other rooted at E → E * E with E + E below]
Resolving Ambiguity Use disambiguating rules to throw away undesirable parse trees Rewrite grammars by incorporating disambiguating rules into grammars 27
An Example. The dangling-else grammar: stmt → if expr then stmt | if expr then stmt else stmt | other. Two parse trees for: if E1 then if E2 then S1 else S2
An Example. [Figure: the two parse trees — in one, the else attaches to the outer if: if E then (if E then S) else S; in the other, the else attaches to the inner if: if E then (if E then S else S)]. Preferred parse: else matches the closest then
Disambiguating Rules. Rule: match each else with the closest previous unmatched then. Remove undesired state transitions in the pushdown automaton: on the shift/reduce conflict on else, the 1st parse corresponds to reduce, the 2nd parse to shift
Grammar Rewriting. stmt → m_stmt | unm_stmt ; m_stmt (with only paired then-else) → if expr then m_stmt else m_stmt | other ; unm_stmt → if expr then stmt | if expr then m_stmt else unm_stmt
RE vs. CFG. Every language described by an RE can also be described by a CFG. Example, (a|b)*abb: A0 → a A0 | b A0 | a A1 ; A1 → b A2 ; A2 → b A3 ; A3 → ε. Properties: (1) right branching; (2) each RHS starts with a terminal symbol
RE vs. CFG. Regular grammar for (a|b)*abb: right branching, and each RHS starts with a terminal symbol: A0 → a A0 | b A0 | a A1 ; A1 → b A2 ; A2 → b A3 ; A3 → ε
RE vs. CFG. RE: (a|b)*abb. [Figure: NFA with states 0–3; state 0 loops on a and b; transitions 0 →a 1 →b 2 →b 3]. Grammar: A0 → a A0 | b A0 | a A1 ; A1 → b A2 ; A2 → b A3 ; A3 → ε
RE vs. CFG. A DFA for (a|b)*abb [Figure: states 0–3, start 0, final 3] and the corresponding grammar: A0 → a A1 | b A0 ; A1 → a A1 | b A2 ; A2 → a A1 | b A3 ; A3 → a A1 | b A0 | ε
CFG: Expressive Power (cont.) Writing a CFG for an FSA (RE): define a non-terminal Ni for the state with state number i; start symbol S = N0 (assuming that state 0 is the initial state); for each transition δ(i,a) = j (from state i to state j on input symbol a), add a new production Ni → a Nj to P (if a = ε, add Ni → Nj); for each final state i, add a new production Ni → ε to P
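The state-to-nonterminal construction above can be sketched in a few lines of Python. The transition table is the standard DFA for (a|b)*abb from the earlier slide; the representation (productions as (lhs, rhs-list) pairs, [] for ε) is an assumption for illustration:

```python
# DFA for (a|b)*abb: states 0..3, start state 0, final state {3}
delta = {(0, "a"): 1, (0, "b"): 0, (1, "a"): 1, (1, "b"): 2,
         (2, "a"): 1, (2, "b"): 3, (3, "a"): 1, (3, "b"): 0}
finals = {3}

productions = []                                   # (lhs, rhs) pairs
for (i, a), j in delta.items():
    productions.append((f"N{i}", [a, f"N{j}"]))    # delta(i,a)=j  =>  Ni -> a Nj
for i in finals:
    productions.append((f"N{i}", []))              # final state i  =>  Ni -> epsilon
```

The result is a right-linear grammar: every production is of the form Ni → a Nj or Ni → ε, exactly the shape the slide describes.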
CFG: Expressive Power. CFG vs. Regular Expression (R.E.): every R.E. can be recognized by an FSA; every FSA can be represented by a CFG with production rules of the forms A → a B and A → ε. Therefore, L(RE) ⊆ L(CFG)
CFG: Expressive Power (cont.) Chomsky Hierarchy: R.E.: regular sets (recognized by FSAs); CFG: context-free (pushdown automata); CSG: context-sensitive (linear bounded automata); unrestricted: recursively enumerable (Turing machines)
Push-Down Automata. [Figure: Input → Finite Automaton + Stack → Output]
RE vs. CFG. Why use REs for lexical syntax? Lexical rules do not need a notation as powerful as CFGs; REs are more concise and easier to understand than CFGs; more efficient lexical analyzers can be constructed from REs than from CFGs; REs provide a way of modularizing the front end into two manageable-sized components
CFG vs. Finite-State Machine. Inappropriateness of FSAs: constituents: only terminals; recursion: FSAs do not allow A ⇒ … B … ⇒ … A …. RTN (Recursive Transition Network): an FSA augmented with recursion; each arc carries a terminal or a non-terminal; if an arc is a non-terminal: call the sub-transition network & return upon traversal
Nonregular Constructs. REs can denote only a fixed number of repetitions or an unspecified number of repetitions of one given construct, e.g., a*b*. A nonregular construct: L = {aⁿbⁿ | n ≥ 1}
Non-Context-Free Constructs. CFGs can denote only a fixed number of repetitions or an unspecified number of repetitions of one or two (paired) given constructs, e.g., aⁿbⁿ. Some non-context-free constructs: L1 = {wcw | w is in (a|b)*} — declaration/use of identifiers; L2 = {aⁿbᵐcⁿdᵐ | n ≥ 1 and m ≥ 1} — #formal arguments vs. #actual arguments; L3 = {aⁿbⁿcⁿ | n ≥ 0}
Context-Free Constructs. An FA (RE) cannot keep counts; CFGs can keep count of two items, but not three. Similar but context-free constructs: L1 = {wcwᴿ | w is in (a|b)*, R: reverse order}; L2 = {aⁿbᵐcᵐdⁿ | n ≥ 1 and m ≥ 1}; L2' = {aⁿbⁿcᵐdᵐ | n ≥ 1 and m ≥ 1}; L3 = {aⁿbⁿ | n ≥ 1}
CFG Parsers 46
Types of CFG Parsers. Universal: can parse any CFG: CYK, Earley. CYK: exhaustively matches sub-ranges of the input tokens against grammar rules, from smaller ranges to larger ranges. Earley: exhaustively enumerates possible expectations left-to-right, according to the current input token and the grammar. Non-universal: e.g., recursive-descent parsers. Universal (for all grammars) is NOT always efficient
Types of CFG Parsers. Practical parsers [what is a good parser?]: Simple: simple program structure; left-to-right (or right-to-left) scan (middle-out or island-driven scanning is often not preferred); top-down or bottom-up matching. Efficient: efficient for good and bad inputs: parses normal syntax quickly; detects errors immediately on the next token. Deterministic: no alternative choices during parsing given the next token; small lookahead buffer (which also contributes to efficiency)
Types of CFG Parsers Top Down: Matching from start symbol down to terminal tokens Bottom Up: Matching input tokens with reducible rules from terminal up to start symbol 49
Efficient CFG Parsers Top Down: LL Parsers Matching from start symbol down to terminal tokens, left-to-right, according to a leftmost derivation sequence Bottom Up: LR Parsers Matching input tokens with reducible rules, left-to-right, from terminal up to start symbol, in a reverse order of rightmost derivation sequence 50
Efficient CFG Parsers Efficient & Deterministic Parsing only possible for some subclasses of grammars with special parsing algorithms Top Down: Parsing LL Grammars with LL Parsers Bottom Up: Parsing LR Grammars with LR Parsers LR grammar is a larger class of grammars than LL 51
Parsing Table Construction for Efficient Parsers. Parsing table: a pre-computed table (according to the grammar), indicating the appropriate action(s) to take in any predefined state when some input token(s) is/are under examination. Lookahead symbol(s): the input symbol(s) under examination for determining the next action(s). [Figure: table with rows State-0, State-1, State-2, columns id, +, *, num, and entries action-1 … action-5]. Good parsers do not change their code when the grammar is revised: table driven
Parsing Table Construction for Efficient Parsers Parsing Table Construction: Decide a pre-defined number of lookaheads to use for predicting next state Define and enumerate all the unique states for the parsing method Decide the actions to take in all states with all possible lookahead(s) 53
Parsing Table Construction for Efficient Parsers. X-Parser: you can invent any parser and call it the X-Parser, but its parsing algorithm may not handle all grammars deterministically, and thus efficiently. X-Grammar: any grammar whose parsing table for the X-parsing method/X-parser has no conflicting actions in any state. Non-X grammar: has more than one action to take in some state
Parsing Table Construction for Efficient Parsers. k: the number of lookahead symbols used by a parser to determine the next action. A larger number of lookahead symbols tends to make conflicting actions less likely, but may result in a much larger table that grows exponentially with the number of lookaheads, and does not guarantee determinism for some grammars (inherently ambiguous ones). X(k) parser: an X parser that uses k lookahead symbols to determine the next action. X(k) grammar: any grammar deterministically parsable with an X(k) parser
Types of Grammars Capable of Efficient Parsing LL(k) Grammars Grammars that can be deterministically parsed using an LL(k) parsing algorithm e.g., LL(1) grammar LR(k) Grammars Grammars that can be deterministically parsed using an LR(k) parsing algorithm e.g., SLR(1) grammar, LR(1) grammar, LALR(1) grammar 56
Top-Down CFG Parsers Recursive Descent Parser vs. Non-Recursive LL(1) Parser 57
Top-Down Parsing. Construct a parse tree from the root to the leaves using a leftmost derivation. Grammar: S → c A B ; A → a b | a ; B → d. Input: cad. [Figure: successive trees — expand S → cAB; try A → ab and fail against the input; backtrack to A → a; then B → d]
Predictive Parsing. A top-down parsing without backtracking: there is only one alternative production to choose at each derivation step. stmt → if expr then stmt else stmt | while expr do stmt | begin stmt_list end
LL(k) Parsing The first L stands for scanning the input from left to right The second L stands for producing a leftmost derivation The k stands for the number of input symbols for lookahead used to choose alternative productions at each derivation step 60
LL(1) Parsing. Uses one input symbol of lookahead. Same as recursive-descent parsing, but nonrecursive: predictive parsing
Recursive Descent Parsing (more) The parser consists of a set of (possibly recursive) procedures Each procedure is associated with a nonterminal of the grammar The sequence of procedures called in processing the input implicitly defines a parse tree for the input 62
An Example. type → simple | id | array [ simple ] of type ; simple → integer | char | num dotdot num
An Example. [Figure: parse tree for "array [ num dotdot num ] of integer": type → array [ simple ] of type, with simple → num dotdot num and type → simple → integer]
An Example procedure type; begin if lookahead is in { integer, char, num } then simple else if lookahead = id then match(id) else if lookahead = array then begin match(array); match('['); simple; match(']'); match(of); type end else error end; 65
An Example procedure match(t : token); begin if lookahead = t then lookahead := nexttoken else error end; 66
An Example procedure simple; begin if lookahead = integer then match(integer) else if lookahead = char then match(char) else if lookahead = num then begin match(num); match(dotdot); match(num) end else error end; 67
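The three procedures above translate almost mechanically into Python. The sketch below is an illustrative rendering (the class and attribute names are my own), with the lookahead drawn from a token list instead of a scanner:

```python
class Parser:
    """Recursive-descent parser for:
       type   -> simple | id | array [ simple ] of type
       simple -> integer | char | num dotdot num"""
    def __init__(self, tokens):
        self.tokens = tokens
        self.pos = 0

    @property
    def lookahead(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else "$"

    def match(self, t):
        if self.lookahead == t:
            self.pos += 1                      # advance the input pointer
        else:
            raise SyntaxError(f"expected {t}, got {self.lookahead}")

    def type(self):
        if self.lookahead in {"integer", "char", "num"}:
            self.simple()
        elif self.lookahead == "id":
            self.match("id")
        elif self.lookahead == "array":
            self.match("array"); self.match("["); self.simple()
            self.match("]"); self.match("of"); self.type()
        else:
            raise SyntaxError(f"unexpected {self.lookahead}")

    def simple(self):
        if self.lookahead == "integer":
            self.match("integer")
        elif self.lookahead == "char":
            self.match("char")
        elif self.lookahead == "num":
            self.match("num"); self.match("dotdot"); self.match("num")
        else:
            raise SyntaxError(f"unexpected {self.lookahead}")

p = Parser(["array", "[", "num", "dotdot", "num", "]", "of", "integer"])
p.type()
ok = p.pos == len(p.tokens)                    # True iff all input was consumed
```

Note how each procedure decides which alternative to apply purely from the one-symbol lookahead, which is exactly the predictive (no-backtracking) property.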
LL(k) Constraint: Left Recursion. A grammar is left recursive if it has a nonterminal A such that A ⇒+ Aα. [Figure: a left-recursive A → Aα | β grows a left spine A, Aα, Aαα, …; the equivalent right-recursive form A → βR, R → αR | ε grows a right spine]
Direct/Immediate Left Recursion. A → Aα1 | Aα2 | ... | Aαm | β1 | β2 | ... | βn (i.e., A → Aαi | βj, i = 1..m; j = 1..n) is equivalent to: A → β1 A' | β2 A' | ... | βn A' ; A' → α1 A' | α2 A' | ... | αm A' | ε. Both generate (β1 | β2 | ... | βn)(α1 | α2 | ... | αm)*
An Example. E → E + T | T ; T → T * F | F ; F → ( E ) | id becomes E → T E' ; E' → + T E' | ε ; T → F T' ; T' → * F T' | ε ; F → ( E ) | id
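The transformation can be sketched as a small function. The representation (alternatives as lists of symbol lists, [] for ε) and the function name are assumptions for illustration:

```python
def eliminate_direct_left_recursion(A, alternatives):
    """A -> A a1 | ... | A am | b1 | ... | bn  becomes
       A -> b1 A' | ... | bn A'  and  A' -> a1 A' | ... | am A' | eps."""
    Ap = A + "'"
    recursive = [rhs[1:] for rhs in alternatives if rhs and rhs[0] == A]
    others = [rhs for rhs in alternatives if not rhs or rhs[0] != A]
    if not recursive:
        return {A: alternatives}                 # nothing to do
    return {A: [beta + [Ap] for beta in others],
            Ap: [alpha + [Ap] for alpha in recursive] + [[]]}  # [] is epsilon

# E -> E + T | T  becomes  E -> T E' ; E' -> + T E' | eps
new = eliminate_direct_left_recursion("E", [["E", "+", "T"], ["T"]])
```

Applying the same step to T → T * F | F yields T → F T' and T' → * F T' | ε, matching the slide.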
Indirect Left Recursion. G0: S → A a | b ; A → A c | S d | ε. Problem: indirect left recursion: S ⇒ Aa ⇒ Sda. Method: scan the rules top-down; rules must not start with symbols defined earlier (substitute them if any); then resolve direct recursion. Solution, step 1 (indirect to direct left recursion): A → A c | A a d | b d | ε. Solution, step 2 (direct left recursion to right recursion): S → A a | b ; A → b d A' | A' ; A' → c A' | a d A' | ε
Indirect Left Recursion. Input: grammar G with no cycles or ε-productions. Output: an equivalent grammar with no left recursion. 1. Arrange the nonterminals in some order A1, A2, ..., An. 2. for i := 1 to n do begin // Step 1: substitute leading symbols of Ai for j := 1 to i - 1 do begin // which are previously handled Aj's replace each production of the form Ai → Aj γ (j < i) by Ai → δ1 γ | δ2 γ | ... | δk γ, where Aj → δ1 | δ2 | ... | δk are all the current Aj-productions; end; eliminate direct left recursion among the Ai-productions // Step 2 end
Left Factoring. Two alternatives of a nonterminal A have a nontrivial common prefix if A → αβ1 and A → αβ2 with α ≠ ε. Factored: A → α A' ; A' → β1 | β2
An Example. S → i E t S | i E t S e S | a ; E → b. Left-factored: S → i E t S S' | a ; S' → e S | ε ; E → b
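A single factoring step can be sketched as follows. This simplified version assumes all listed alternatives share the common prefix (the slide's S → a alternative would first be set aside by grouping alternatives on their first symbol); the names are illustrative:

```python
from functools import reduce

def common_prefix(xs, ys):
    """Longest common prefix of two symbol lists."""
    out = []
    for x, y in zip(xs, ys):
        if x != y:
            break
        out.append(x)
    return out

def left_factor(A, alternatives):
    """One factoring step: A -> a b1 | a b2  =>  A -> a A' ; A' -> b1 | b2."""
    prefix = reduce(common_prefix, alternatives)
    if not prefix:
        return {A: alternatives}                 # nothing to factor
    Ap = A + "'"
    return {A: [prefix + [Ap]],
            Ap: [alt[len(prefix):] for alt in alternatives]}  # [] is epsilon

# Dangling-else: S -> iEtS | iEtSeS  =>  S -> iEtS S' ; S' -> eps | eS
g = left_factor("S", [["i", "E", "t", "S"], ["i", "E", "t", "S", "e", "S"]])
```

In a full implementation the step is repeated until no two alternatives of any nonterminal share a prefix.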
Top-Down Parsing: as Stack Matching. Construct a parse tree from the root to the leaves using a leftmost derivation. Grammar: S → c A B ; A → a b | a ; B → d. Input: cad. [Figure: the same trees as before, now read as stack expansion and matching]
Nonrecursive Predictive Parsing. Non-recursive: a stack plus a driver program (instead of recursive procedures). Predictive: precomputed parsing actions in a parsing table M[X,a] = {X → Y1 Y2 … Yk}. [Figures: the parsing program (parser/driver) reads the input through the lookahead, consults the parsing table, and manipulates the stack — expand a non-terminal X on top of the stack into Y1 Y2 … Yk; match the top terminal Y1 = a against the input; when the top terminal Yk = c does not match, report an error and recover]
Stack Operations. Match: when the top stack symbol is a terminal and it matches the input symbol, pop the top stack symbol and advance the input pointer. Expand: when the top stack symbol is a nonterminal, replace this symbol with the right-hand side of one of its productions, leftmost RHS symbol at the top of the stack
An Example. type → simple | id | array [ simple ] of type ; simple → integer | char | num dotdot num
An Example. (E = expand, M = match; stack top at the right)
Action | Stack                     | Input
E      | type                      | array [ num dotdot num ] of integer
M      | type of ] simple [ array  | array [ num dotdot num ] of integer
M      | type of ] simple [        | [ num dotdot num ] of integer
E      | type of ] simple          | num dotdot num ] of integer
M      | type of ] num dotdot num  | num dotdot num ] of integer
M      | type of ] num dotdot      | dotdot num ] of integer
M      | type of ] num             | num ] of integer
M      | type of ]                 | ] of integer
M      | type of                   | of integer
E      | type                      | integer
E      | simple                    | integer
M      | integer                   | integer
Parsing program: push $S onto the stack, where S is the start symbol; set ip to point to the first symbol of w$; // try to match S$ with w$ repeat let X be the top stack symbol and a the symbol pointed to by ip; if X is a terminal or $ then if X = a then pop X from the stack and advance ip else error // or error_recovery() else // X is a nonterminal if M[X, a] = X → Y1 Y2 ... Yk then pop X from the stack and push Yk ... Y2 Y1 onto the stack else error // or error_recovery() until X = $
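The driver loop above can be sketched in Python for the expression grammar. The table M is the LL(1) table given later in these slides, with RHSs written as symbol lists and [] for ε; variable names are illustrative:

```python
M = {("E", "id"): ["T", "E'"], ("E", "("): ["T", "E'"],
     ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
     ("T", "id"): ["F", "T'"], ("T", "("): ["F", "T'"],
     ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
     ("T'", ")"): [], ("T'", "$"): [],
     ("F", "id"): ["id"], ("F", "("): ["(", "E", ")"]}
nonterminals = {"E", "E'", "T", "T'", "F"}

def ll1_parse(tokens):
    """Return the sequence of productions applied (a leftmost derivation)."""
    stack, ip, output = ["$", "E"], 0, []
    w = tokens + ["$"]
    while stack[-1] != "$":
        X, a = stack[-1], w[ip]
        if X not in nonterminals:            # terminal on top: must match
            if X == a:
                stack.pop(); ip += 1
            else:
                raise SyntaxError(f"expected {X}, got {a}")
        elif (X, a) in M:                    # expand X by M[X, a]
            rhs = M[(X, a)]
            stack.pop()
            stack.extend(reversed(rhs))      # leftmost RHS symbol on top
            output.append((X, rhs))
        else:
            raise SyntaxError(f"no entry M[{X},{a}]")
    if w[ip] != "$":
        raise SyntaxError("input not fully consumed")
    return output

trace = ll1_parse(["id", "+", "id", "*", "id"])
```

The returned production sequence is exactly the output column of the parse trace shown later for id + id * id.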
Parser Driven by a Parsing Table: Non-recursive Descent. Table row for X: M[X,a] = X → Y1 Y2 … Yk and M[X,b] = X → Z1 Z2 … Zm, where a ∈ FIRST(Y1 Y2 … Yk) and b ∈ FIRST(Z1 Z2 … Zm). X() { // WITHOUT ε-production: no X → ε if (LA = 'a') then Y1(); Y2(); … Yk(); else if (LA = 'b') then Z1(); Z2(); … Zm(); else ERROR(); // no X → ε } // recursive-descent procedure for matching X
Parser Driven by a Parsing Table: Non-recursive Descent. With a ∈ FIRST(Y1 Y2 … Yk), b ∈ FIRST(Z1 Z2 … Zm), and d ∈ FOLLOW(X) (S ⇒* αXdβ): X() { // WITH ε-production: X → ε if (LA = 'a') then Y1(); Y2(); … Yk(); else if (LA = 'b') then Z1(); Z2(); … Zm(); else if (LA ∈ FOLLOW(X)) RETURN; // apply X → ε else ERROR(); } // recursive-descent procedure for matching X
First Sets. The first set of a string α is the set of terminals that begin the strings derived from α; if α ⇒* ε, then ε is also in the first set of α. ε is used simply to flag whether α can be null when computing first sets, not for matching any real input when parsing. FIRST(α) = { a | α ⇒* aβ } ∪ { ε, if α ⇒* ε }. FIRST(α) containing ε means that α ⇒* ε
Compute First Sets. If X is a terminal, then FIRST(X) is {X}. If X is a nonterminal and X → ε is a production, then add ε to FIRST(X). If X is a nonterminal and X → Y1 Y2 ... Yk is a production, then add a to FIRST(X) if for some i, a is in FIRST(Yi) and ε is in all of FIRST(Y1), ..., FIRST(Yi-1). If ε is in FIRST(Yj) for all j, then add ε to FIRST(X)
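The three rules above are naturally computed as a fixed-point iteration. Below is an illustrative Python sketch for the left-recursion-free expression grammar, using "" for ε; the helper names are my own:

```python
G = {"E":  [["T", "E'"]],
     "E'": [["+", "T", "E'"], []],      # [] is the epsilon alternative
     "T":  [["F", "T'"]],
     "T'": [["*", "F", "T'"], []],
     "F":  [["(", "E", ")"], ["id"]]}

def first_sets(G):
    FIRST = {A: set() for A in G}
    def first_of(symbols):              # FIRST of a symbol string
        out = set()
        for s in symbols:
            f = FIRST[s] if s in G else {s}   # terminal a: FIRST(a) = {a}
            out |= f - {""}
            if "" not in f:
                return out
        return out | {""}               # every symbol can derive epsilon
    changed = True
    while changed:                      # iterate until nothing new is added
        changed = False
        for A, alts in G.items():
            for rhs in alts:
                new = first_of(rhs)
                if not new <= FIRST[A]:
                    FIRST[A] |= new
                    changed = True
    return FIRST

FIRST = first_sets(G)
```

The results agree with the FIRST sets listed on the example slide.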
Follow Sets. What to do when matching null: A → ε? Top-down recursive-descent parsing: assumes success. LL: more predictive => use the follow set of A. The follow set of a nonterminal A is the set of terminals that can appear immediately to the right of A in some sentential form; namely, S ⇒* αAaβ implies a is in the follow set of A
Compute Follow Sets. Initialization: place $ in FOLLOW(S), where S is the start symbol and $ is the input right end marker. If there is a production A → αBβ, then everything in FIRST(β) except ε is placed in FOLLOW(B); ε is not considered a visible input that can follow any symbol. If there is a production A → αB, or A → αBβ where FIRST(β) contains ε (i.e., β ⇒* ε), then everything in FOLLOW(A) is in FOLLOW(B): S ⇒* γAaδ and A ⇒ αB imply S ⇒* γαBaδ. YES: every symbol that can follow A can also follow B. NO!: not every symbol that can follow B can follow A
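The FOLLOW rules can be sketched the same way. To keep the sketch short, the FIRST sets are taken from the example slide rather than recomputed ("" stands for ε, "$" for the end marker); the helper names are illustrative:

```python
G = {"E":  [["T", "E'"]],
     "E'": [["+", "T", "E'"], []],
     "T":  [["F", "T'"]],
     "T'": [["*", "F", "T'"], []],
     "F":  [["(", "E", ")"], ["id"]]}
FIRST = {"E": {"(", "id"}, "E'": {"+", ""}, "T": {"(", "id"},
         "T'": {"*", ""}, "F": {"(", "id"}}

def first_of(symbols):
    out = set()
    for s in symbols:
        f = FIRST.get(s, {s})
        out |= f - {""}
        if "" not in f:
            return out
    return out | {""}

def follow_sets(G, start="E"):
    FOLLOW = {A: set() for A in G}
    FOLLOW[start].add("$")                     # rule 1: $ follows the start symbol
    changed = True
    while changed:
        changed = False
        for A, alts in G.items():
            for rhs in alts:
                for i, B in enumerate(rhs):
                    if B not in G:             # only nonterminals get FOLLOW sets
                        continue
                    beta = first_of(rhs[i + 1:])
                    new = beta - {""}          # rule 2: FIRST(beta) minus epsilon
                    if "" in beta:
                        new |= FOLLOW[A]       # rule 3: beta is nullable
                    if not new <= FOLLOW[B]:
                        FOLLOW[B] |= new
                        changed = True
    return FOLLOW

FOLLOW = follow_sets(G)
```

The fixed point matches the FOLLOW sets listed on the example slide.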
An Example. E → T E' ; E' → + T E' | ε ; T → F T' ; T' → * F T' | ε ; F → ( E ) | id. FIRST(E) = FIRST(T) = FIRST(F) = { (, id }; FIRST(E') = { +, ε }; FIRST(T') = { *, ε }. FOLLOW(E) = FOLLOW(E') = { ), $ }; FOLLOW(T) = FOLLOW(T') = { +, ), $ }; FOLLOW(F) = { +, *, ), $ }
Constructing Parsing Table. Input: grammar G. Output: parsing table M. Method: 1. For each production A → α of the grammar, do steps 2 and 3. 2. For each terminal a in FIRST(α), add A → α to M[A, a]. 3. If ε is in FIRST(α) [α ⇒* ε], add A → α to M[A, b] for each terminal b [including $] in FOLLOW(A) — that is, if ε is in FIRST(α) and $ is in FOLLOW(A), add A → α to M[A, $]. 4. Make each undefined entry of M an error
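Steps 1–4 can be sketched directly. FIRST and FOLLOW here are the precomputed sets from the example slide ("" stands for ε), and a multiply-defined entry is reported as a non-LL(1) conflict; names are illustrative:

```python
G = {"E":  [["T", "E'"]],
     "E'": [["+", "T", "E'"], []],
     "T":  [["F", "T'"]],
     "T'": [["*", "F", "T'"], []],
     "F":  [["(", "E", ")"], ["id"]]}
FIRST = {"E": {"(", "id"}, "E'": {"+", ""}, "T": {"(", "id"},
         "T'": {"*", ""}, "F": {"(", "id"}}
FOLLOW = {"E": {")", "$"}, "E'": {")", "$"}, "T": {"+", ")", "$"},
          "T'": {"+", ")", "$"}, "F": {"+", "*", ")", "$"}}

def first_of(symbols):
    out = set()
    for s in symbols:
        f = FIRST.get(s, {s})
        out |= f - {""}
        if "" not in f:
            return out
    return out | {""}

def build_table(G):
    M = {}
    for A, alts in G.items():
        for rhs in alts:
            f = first_of(rhs)
            # step 2: FIRST(alpha); step 3: FOLLOW(A) if alpha is nullable
            for a in (f - {""}) | (FOLLOW[A] if "" in f else set()):
                if (A, a) in M:                  # multiply-defined: not LL(1)
                    raise ValueError(f"conflict at M[{A},{a}]")
                M[(A, a)] = rhs
    return M

M = build_table(G)
```

Undefined (A, a) pairs are simply absent from the dict, which plays the role of the error entries of step 4.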
LL(1) Parsing Table Construction. When to apply A → α? On a ∈ FIRST(α); on b ∈ FOLLOW(A) if ε ∈ FIRST(α) (including the alternative A → ε); error on c in neither FIRST(α) nor FOLLOW(A). A() { // WITH/WITHOUT ε-productions: α ⇒* ε or not if (LA = a ∈ FIRST(Y1 Y2 … Yk)) then Y1(); Y2(); … Yk(); else if (LA = b ∈ FOLLOW(A) and ε ∈ FIRST(Z1 Z2 …)) Z1(); Z2(); … Zm(); // nullable else ERROR(); } // recursive version of the LL(1) parser
An Example.
   | id      | +         | *         | (       | )      | $
E  | E → TE' |           |           | E → TE' |        |
E' |         | E' → +TE' |           |         | E' → ε | E' → ε
T  | T → FT' |           |           | T → FT' |        |
T' |         | T' → ε    | T' → *FT' |         | T' → ε | T' → ε
F  | F → id  |           |           | F → (E) |        |
An Example. (stack top at the right)
Stack   | Input         | Output
$E      | id + id * id$ |
$E'T    | id + id * id$ | E → TE'
$E'T'F  | id + id * id$ | T → FT'
$E'T'id | id + id * id$ | F → id
$E'T'   | + id * id$    |
$E'     | + id * id$    | T' → ε
$E'T+   | + id * id$    | E' → +TE'
$E'T    | id * id$      |
$E'T'F  | id * id$      | T → FT'
$E'T'id | id * id$      | F → id
$E'T'   | * id$         |
$E'T'F* | * id$         | T' → *FT'
$E'T'F  | id$           |
$E'T'id | id$           | F → id
$E'T'   | $             |
$E'     | $             | T' → ε
$       | $             | E' → ε
LL(1) Grammars. A grammar is an LL(1) grammar if its predictive parsing table has no multiply-defined entries
A Counter Example. S → i E t S S' | a ; S' → e S | ε ; E → b. The entry M[S', e] holds both S' → ε (since e ∈ FOLLOW(S')) and S' → e S (since e ∈ FIRST(e S)): a multiply-defined entry, so the grammar is not LL(1). Disambiguation: matching the closest then means choosing S' → e S
LL(1) Grammars or Not?? A grammar G is LL(1) iff whenever A → α | β are two distinct productions of G, the following conditions hold: (1) for no terminal a do both α and β derive strings beginning with a — otherwise the M[A, FIRST(α) ∩ FIRST(β)] entries have conflicting actions; (2) at most one of α and β can derive the empty string — otherwise the M[A, FOLLOW(A)] entries have conflicting actions; (3) if β ⇒* ε, then α does not derive any string beginning with a terminal in FOLLOW(A) — otherwise the M[A, FIRST(α) ∩ FOLLOW(A)] entries have conflicting actions
Non-LL(1) Grammar: Ambiguous According to LL(1) Parsing Table Construction. When will A → α and A → β appear in the same table cell? (1) a ∈ FIRST(α) ∩ FIRST(β): both alternatives derive strings beginning with a; (2) b ∈ FOLLOW(A), with both α ⇒* ε and β ⇒* ε; (3) a ∈ FIRST(α) ∩ FOLLOW(A), with β ⇒* ε (but α ⇒* a…)
LL(1) Grammars or Not?? If G is left-recursive or ambiguous, then M will have at least one multiply-defined entry => non-LL(1). E.g., X → Xa | b => FIRST(X) = {b} (and, of course, FIRST(b) = {b}) => M[X,b] includes both X → Xa and X → b. Ambiguous grammars and grammars with left-recursive productions cannot be LL(1); no LL(1) grammar can be ambiguous
Error Recovery for LL Parsers 103
Syntactic Errors. Empty entries in a parsing table: a syntactic error is detected when the lookahead symbol corresponding to such an entry is in the input buffer. Error-recovery information can be encoded in such entries to take appropriate actions upon error. Error detection: (1) stack top = x (a terminal) and x ≠ input a; (2) stack top = A and M[A, a] = empty (error)
Error Recovery Strategies. Panic mode: skip tokens until a token in a set of synchronizing tokens appears; for INS(ertion)-type errors, sync at delimiters, keywords, …, that have clear functions. Phrase-level recovery: local INS(ertion), DEL(etion), SUB(stitution) types of errors. Error productions: define error patterns in the grammar. Global correction [grammar correction]: minimum-distance correction
Error Recovery — Panic Mode. Panic mode: skip tokens until a token in a set of synchronizing tokens appears. Commonly used synchronizing tokens: SUB(A,ip): use FOLLOW(A) as the sync set for A (pop A); use the FIRST set of a higher construct as a sync set for a lower construct. INS(ip): use FIRST(A) as the sync set for A. If A can derive ε: use the production deriving ε as the default. DEL(ip): if a terminal on the stack cannot be matched, pop the terminal
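A rough sketch of panic-mode recovery wired into the table-driven loop, using FOLLOW-based sync sets. The len(stack) > 2 guard is my own simplification so the start symbol is never popped (matching the later trace, which skips ')' rather than popping E); the action strings are illustrative:

```python
M = {("E", "id"): ["T", "E'"], ("E", "("): ["T", "E'"],
     ("E'", "+"): ["+", "T", "E'"], ("E'", ")"): [], ("E'", "$"): [],
     ("T", "id"): ["F", "T'"], ("T", "("): ["F", "T'"],
     ("T'", "+"): [], ("T'", "*"): ["*", "F", "T'"],
     ("T'", ")"): [], ("T'", "$"): [],
     ("F", "id"): ["id"], ("F", "("): ["(", "E", ")"]}
SYNC = {"E": {")", "$"}, "E'": {")", "$"}, "T": {"+", ")", "$"},
        "T'": {"+", ")", "$"}, "F": {"+", "*", ")", "$"}}   # FOLLOW-based sync sets
nonterminals = set(SYNC)

def parse_with_recovery(tokens):
    """Return the list of recovery actions taken (empty for a correct input)."""
    errors, stack, ip = [], ["$", "E"], 0
    w = tokens + ["$"]
    while stack[-1] != "$":
        X, a = stack[-1], w[ip]
        if X not in nonterminals:
            if X == a:
                stack.pop(); ip += 1
            else:                                 # unmatched terminal: pop it
                errors.append(f"missing {X}"); stack.pop()
        elif (X, a) in M:                         # normal expand
            stack.pop(); stack.extend(reversed(M[(X, a)]))
        elif a in SYNC[X] and len(stack) > 2:     # sync token follows X: give up on X
            errors.append(f"incomplete {X}"); stack.pop()
        else:                                     # panic: skip the input token
            errors.append(f"skipped {a}"); ip += 1
    return errors

errs = parse_with_recovery([")", "id", "*", "+", "id"])
```

On the erroneous input ") id * + id" this reproduces the later trace: skip the stray ')' and pop the F whose operand is missing after '*', then finish the parse.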
Error Recovery — Panic Mode. [Figure: three stack/input snapshots. SUB(A,ip): top-of-stack A with *ip ∈ FOLLOW(A) — pop A. INS(ip): top-of-stack A — skip input until *ip ∈ FIRST(A). DEL(ip): top-of-stack terminal x with *ip ≠ x — pop x]
Error Recovery Actions. Using FOLLOW & FIRST sets to sync. Expanding non-terminal A: M[A,a] = error (blank): skip a in the input = delete all such a (until syncing at a sync symbol b) /* panic */; M[A,b] = sync (at FOLLOW(A)): pop A from the stack = b is a sync symbol following A; M[A,b] = A → α (sync at FIRST(A)): expand A (the normal parsing action). Matching terminal x: (*sp = x) ≠ a: pop x from the stack = missing input token x
An Example. FOLLOW(X) is used to expand ε-productions or to sync (on errors); FIRST(X) is used to expand non-ε-productions or to sync (on errors). FOLLOW(E) = FOLLOW(E') = { ), $ }; FOLLOW(F) = { +, *, ), $ }.
   | id      | +         | *         | (       | )      | $
E  | E → TE' |           |           | E → TE' | sync   | sync
E' |         | E' → +TE' |           |         | E' → ε | E' → ε
T  | T → FT' | sync      |           | T → FT' | sync   | sync
T' |         | T' → ε    | T' → *FT' |         | T' → ε | T' → ε
F  | F → id  | sync      | sync      | F → (E) | sync   | sync
An Example. (stack top at the right)
Stack   | Input        | Output
$E      | ) id * + id$ | error, skip )
$E      | id * + id$   | id is in FIRST(E)
$E'T    | id * + id$   | E → TE'
$E'T'F  | id * + id$   | T → FT'
$E'T'id | id * + id$   | F → id
$E'T'   | * + id$      |
$E'T'F* | * + id$      | T' → *FT'
$E'T'F  | + id$        | error, M[F,+] = sync / FOLLOW(F)
$E'T'   | + id$        | F popped
$E'     | + id$        | T' → ε
$E'T+   | + id$        | E' → +TE'
$E'T    | id$          |
$E'T'F  | id$          | T → FT'
$E'T'id | id$          | F → id
$E'T'   | $            |
$E'     | $            | T' → ε
$       | $            | E' → ε
Parse Tree — Error Recovered. ) id * + id => id * F + id. [Figure: the recovered parse tree — E → T E' with T → F T', F → id, T' → * F T' (the second F standing for the popped, missing operand), and E' → + T E' deriving the final id]