Lexical and Syntax Analysis. Top-Down Parsing


Overview: a string of characters (easy for humans to write and understand) has its lexemes identified to give a string of tokens, which syntax analysis turns into a data structure (easy for programs to transform).

Syntax A syntax is a set of rules defining the valid strings of a language, often specified by a context-free grammar. For example, a grammar E for arithmetic expressions:

e → x | y | e + e | e - e | e * e | ( e )

Derivations A derivation is a proof that some string conforms to a grammar. A leftmost derivation:

e ⇒ e + e ⇒ x + e ⇒ x + ( e ) ⇒ x + ( e * e ) ⇒ x + ( y * e ) ⇒ x + ( y * x )

Derivations A rightmost derivation:

e ⇒ e + e ⇒ e + ( e ) ⇒ e + ( e * e ) ⇒ e + ( e * x ) ⇒ e + ( y * x ) ⇒ x + ( y * x )

There are many ways to derive the same string: many ways to write the same proof.

Parse tree: motivation A parse tree is also a proof that a given input is valid according to the grammar. But a parse tree: is more concise, since we don't write out the sentence every time a non-terminal is expanded; and abstracts over the order in which rules are applied.

Parse tree: intuition If non-terminal n has a production n → X Y Z, where X, Y, and Z are terminals or non-terminals, then a parse tree may have an interior node labelled n with three children labelled X, Y, and Z.

Parse tree: definition A parse tree is a tree in which: the root is labelled by the start symbol; each leaf is labelled by a terminal symbol, or ε; each interior node is labelled by a non-terminal; if n is a non-terminal labelling an interior node whose children are X1, X2, ..., Xn, then there must exist a production n → X1 X2 ... Xn.

Example 1 Example input string: x + y * x. A resulting parse tree according to grammar E has root e with children e, + and e; the left child e derives x, and the right child e has children e, * and e, which derive y and x respectively.

Example 2 The following is not a parse tree according to grammar E: a tree whose root e has children x, + and e, where that e in turn has children e, * and e deriving y and x. Why? Because e → x + e is not a production in grammar E.

Grammar notation Non-terminals are underlined. Rather than writing

e → x
e → e + e

we may write:

e → x | e + e

(Also, the symbols → and ::= will be used interchangeably.)

Syntax Analysis Syntax analysis turns a string of symbols into a parse tree. A parse tree is: 1. A proof that a given input is valid according to the grammar; 2. A data structure that is convenient for compilers to process. (Syntax analysis may also report that the input string is invalid.)

Ambiguity If there exists more than one parse tree for some string then the grammar is ambiguous. For example, the string x+y*x has two parse trees: one whose root expands to e + e (with the right operand expanding to e * e), and one whose root expands to e * e (with the left operand expanding to e + e).

Operator precedence Different parse trees often have different meanings, so we usually want unambiguous grammars. Conventionally, * has a higher precedence (binds tighter) than +, so there is only one interpretation of x+y*x, namely x+(y*x).

Operator associativity Even with precedence rules, ambiguity remains, e.g. x-x-x-x. Binary operators are either: left-associative; right-associative; non-associative. Conventionally, - is left-associative, so there is only one interpretation of x-x-x-x, namely ((x-x)-x)-x.

Ambiguity removal Example input:

e → x | y | e + e | e - e | e * e | ( e )

All operators are left-associative, and * binds tighter than + and -.

Ambiguity removal Example output:

e  → e + e1 | e - e1 | e1
e1 → e1 * e2 | e2
e2 → ( e ) | x | y

Note: ignoring bracketed expressions, e1 disallows + and -, and e2 disallows +, -, and *.

Disallowed parse trees After disambiguation, there are no parse trees corresponding to the following originals: a tree in which the left operand of * is an e + e subtree, and a tree in which the right operand of + is an e - e subtree. The LHS of * cannot contain a +; the RHS of + cannot contain a -.

Ambiguity removal: step-by-step Given a non-terminal e which involves operators at n levels of precedence: Step 1: introduce n+1 new non-terminals, e0 ... en.

Let op denote an operator with precedence i. Step 2a: replace each production e → e op e with ei → ei op ei+1 | ei+1 if op is left-associative, or ei → ei+1 op ei | ei+1 if op is right-associative.

Step 2b: replace each production e → op e with ei → op ei | ei+1. Step 2c: replace each production e → e op with ei → ei op | ei+1.

Construct the precedence table:

Operator   Precedence
+, -       0
*          1

Grammar E after step 2 becomes:

e0 → e0 + e1 | e0 - e1 | e1
e1 → e1 * e2 | e2
e  → ( e ) | x | y

Step 3: replace e with en on the left-hand side of each remaining production. After step 3:

e0 → e0 + e1 | e0 - e1 | e1
e1 → e1 * e2 | e2
e2 → ( e ) | x | y

Step 4: replace all occurrences of e0 with e. After step 4:

e  → e + e1 | e - e1 | e1
e1 → e1 * e2 | e2
e2 → ( e ) | x | y
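
For instance, under the new grammar the string x + y * x has exactly one parse tree, corresponding to x + (y * x); a leftmost derivation witnessing this is:

e ⇒ e + e1 ⇒ e1 + e1 ⇒ e2 + e1 ⇒ x + e1 ⇒ x + e1 * e2 ⇒ x + e2 * e2 ⇒ x + y * e2 ⇒ x + y * x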

Exercise 1 Consider the following ambiguous grammar for logical propositions.

p → 0 (Zero) | 1 (One) | ~ p (Negation) | p + p (Disjunction) | p * p (Conjunction)

Now let + and * be right-associative and the operators in increasing order of binding strength be: +, *, ~. Give an unambiguous grammar for logical propositions.

Exercise 2 Which of the following grammars are ambiguous?

b → 0 b 1 | 0 1
e → + e e | - e e | x
s → if b then s | if b then s else s | skip

Homework exercise Consider the following ambiguous grammar G.

s → if b then s | if b then s else s | skip

Give an unambiguous grammar that accepts the same language as G.

Summary so far The syntax of a language is often specified by a context-free grammar. Derivations and parse trees are proofs. Parse trees lead to a concise definition of ambiguity. Unambiguous grammars can be constructed using rules of precedence and associativity.

PART 2: TOP-DOWN PARSING Recursive-Descent Backtracking Left-Factoring Predictive Parsing Left-Recursion Removal First and Follow Sets Parsing tables and LL(1)

Top-down parsing Top-down: begin with the start symbol and expand non-terminals, succeeding when the input string is matched. A good strategy for writing parsers: 1. Implement a syntax checker to accept or refute input strings. 2. Modify the checker to construct a parse tree (a straightforward step).

RECURSIVE DESCENT A popular top-down parsing technique.

Recursive descent A recursive descent parser consists of a set of functions, one for each non-terminal. The function for non-terminal n returns true if some prefix of the input string can be derived from n, and false otherwise.

Consuming the input We assume a global variable next points to the input string.

char* next;

Consume c from the input if possible:

int eat(char c) {
  if (*next == c) { next++; return 1; }
  return 0;
}

Recursive descent Let parse(X) denote X() if X is a non-terminal, and eat(X) if X is a terminal. For each non-terminal N, introduce:

int N() {
  char* save = next;
  /* for each production N → X1 X2 ... Xn: */
  if (parse(X1) && parse(X2) && ... && parse(Xn))
    return 1;
  else
    next = save;   /* backtrack */
  return 0;
}

Exercise 4 Consider the following grammar G with start symbol e.

e → ( e + e ) | ( e * e ) | v
v → x | y

Using recursive descent, write a syntax checker for grammar G.

Answer (part 1)

int e() {
  char* save = next;
  if (eat('(') && e() && eat('+') && e() && eat(')')) return 1; else next = save;
  if (eat('(') && e() && eat('*') && e() && eat(')')) return 1; else next = save;
  if (v()) return 1; else next = save;
  return 0;
}

Answer (part 2)

int v() {
  char* save = next;
  if (eat('x')) return 1; else next = save;
  if (eat('y')) return 1; else next = save;
  return 0;
}

Exercise 5 How many function calls are made by the recursive descent parser to parse the following strings? (x*x) ((x*x)*x) (((x*x)*x)*x) (See animation of backtracking.)

Answer The number of calls is quadratic in the length of the input string.

Input string     Length  Calls
(x*x)            5       21
((x*x)*x)        9       53
(((x*x)*x)*x)    13      117

Lesson: backtracking is expensive! [The slide plots function calls against string length.]

LEFT FACTORING Reducing backtracking!

Left factoring When two productions for a non-terminal share a common prefix, expensive backtracking can be avoided by left-factoring the grammar. Idea: Introduce a new nonterminal that accepts each of the different suffixes.

Example 3 Left-factoring grammar G by introducing non-terminal r:

e → ( e r | v        (the common prefix is "( e")
r → + e ) | * e )    (the different suffixes)
v → x | y
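
As a sketch of how the Exercise 4 checker might be adapted to this left-factored grammar: eat() and v() are reused unchanged, while the rewritten e() and the new r() below are illustrative (not from the slides). The common prefix "( e" is now parsed only once before r() chooses between the two suffixes.

int r();   /* forward declaration */

/* e → ( e r | v */
int e() {
  char* save = next;
  if (eat('(') && e() && r()) return 1; else next = save;
  if (v()) return 1; else next = save;
  return 0;
}

/* r → + e ) | * e ) */
int r() {
  char* save = next;
  if (eat('+') && e() && eat(')')) return 1; else next = save;
  if (eat('*') && e() && eat(')')) return 1; else next = save;
  return 0;
}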

Effect of left-factoring The number of calls is now linear in the length of the input string.

Input string     Length  Calls
(x*x)            5       13
((x*x)*x)        9       22
(((x*x)*x)*x)    13      31

Lesson: left-factoring a grammar reduces backtracking. [The slide plots function calls against string length.]

PREDICTIVE PARSING Eliminating backtracking!

Predictive parsing Idea: know which production of a non-terminal to choose based solely on the next input symbol. Advantage: very efficient since it eliminates all backtracking. Disadvantage: not all grammars can be parsed in this way. (But many useful ones can.)

Running example The following grammar H will be used as a running example to demonstrate predictive parsing.

e → e + e | e * e | ( e ) | x | y

Example: x+y*(y+x)

Removing ambiguity Since + and * are left-associative and * binds tighter than +, we can derive an unambiguous variant of H.

e → e + t | t
t → t * f | f
f → ( e ) | x | y

Left recursion Problem: left-recursive grammars cause recursive descent parsers to loop forever.

int e() {
  char* save = next;
  if (e() && eat('+') && t()) return 1;   /* call to self without consuming any input */
  next = save;
  if (t()) return 1;
  next = save;
  return 0;
}

Eliminating left recursion Let α denote any sequence of grammar symbols.

Rule 1: replace each production n → n α with n' → α n'.
Rule 2: replace each production n → α, where α does not begin with n, with n → α n'.
Rule 3: introduce the new production n' → ε.

Eliminating left recursion Example before:

e → e + v | v
v → x | y

and after:

e → v e'
v → x | y
e' → ε | + v e'
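
As a small sketch of why the transformed grammar is safe for recursive descent (reusing eat() and the v() checker from Exercise 4; the function name e1, standing for e', is my own choice, not from the slides):

int e1();   /* e1 plays the role of e' */

/* e → v e' */
int e()  { return v() && e1(); }

/* e' → + v e' | ε */
int e1() {
  char* save = next;
  if (eat('+') && v() && e1()) return 1; else next = save;
  return 1;   /* the ε production always succeeds */
}

Each recursive call to e1() first consumes a '+', so the parser can no longer call itself without consuming any input.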

Example 4 Running example, after eliminating left recursion.

e  → t e'
e' → + t e' | ε
t  → f t'
t' → * f t' | ε
f  → ( e ) | x | y

FIRST AND FOLLOW SETS Predictive parsers are built using the first and follow sets of each non-terminal in a grammar.

Definition of first sets Let α denote any sequence of grammar symbols. If α can derive a string beginning with terminal a, then a ∈ first(α). If α can derive ε, then ε ∈ first(α).

Computing first sets If a is a terminal then a ∈ first(a α). For the empty string, ε ∈ first(ε). If X1 X2 ... Xn is a sequence of grammar symbols and there is an i such that a ∈ first(Xi) and ε ∈ first(Xj) for all j < i, then a ∈ first(X1 X2 ... Xn). If n → α is a production then first(α) ⊆ first(n).
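
For example, in the grammar of Example 4: first(( e )) = { ( } by the first rule; first(ε) = { ε }; and since e' → + t e' | ε, first(e') = first(+ t e') ∪ first(ε) = { +, ε }.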

Exercise 6 Give all members of the sets: first(v), first(e), first(v e).

e → ( e + e ) | ( e * e ) | v
v → x | ε

Exercise 7 What are the first sets for each non-terminal in the following grammar?

e  → t e'
e' → + t e' | ε
t  → f t'
t' → * f t' | ε
f  → ( e ) | x | y

Answer

first(f)  = { (, x, y }
first(t') = { *, ε }
first(t)  = { (, x, y }
first(e') = { +, ε }
first(e)  = { (, x, y }

Definition of follow sets Let α and β denote any sequence of grammar symbols. Terminal a ∈ follow(n) if the start symbol of the grammar can derive a string of grammar symbols in which a immediately follows n. The set follow(n) never contains ε.

End markers In predictive parsing, it is useful to mark the end of the input string with a $ symbol. ((x*x)*x)$ $ is equivalent to '\0' in C.

Computing follow sets If s is the start symbol of the grammar then $ ∈ follow(s). If n → α x β is a production then everything in first(β) except ε is in follow(x). If n → α x, or n → α x β where ε ∈ first(β), then everything in follow(n) is in follow(x).
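
For example, in the grammar of Example 4: $ ∈ follow(e) because e is the start symbol; the production f → ( e ) puts ) into follow(e); and from e → t e', everything in first(e') except ε (namely +) is in follow(t), and since ε ∈ first(e'), everything in follow(e) is also in follow(t).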

Exercise Give all members of the sets: follow(e), follow(v).

e → ( e + e ) | ( e * e ) | v
v → x | ε

Exercise 8 What are the follow sets for each non-terminal in the following grammar?

e  → t e'
e' → + t e' | ε
t  → f t'
t' → * f t' | ε
f  → ( e ) | x | y

Answer

follow(e') = { $, ) }
follow(e)  = { $, ) }
follow(t') = { +, $, ) }
follow(t)  = { +, $, ) }
follow(f)  = { *, +, ), $ }

Predictive parsing table For each non-terminal n, a parse table T defines which production of n should be chosen, based on the next input symbol a. Rows are indexed by non-terminals (e, r, v, ...), columns by terminals ((, +, ...), and each cell holds a production; for example, T[e, (] = e → ( e r and T[r, +] = r → + e.

Predictive parsing table

for each production n → α:
  for each a ∈ first(α): add n → α to T[n, a]
  if ε ∈ first(α) then for each b ∈ follow(n): add n → α to T[n, b]
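
For instance, for the grammar of Exercise 7: since first(+ t e') = { + }, the production e' → + t e' goes into T[e', +]; and since ε ∈ first(ε) and follow(e') = { $, ) } (Exercise 8), the production e' → ε goes into T[e', $] and T[e', )].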

Exercise 9 Construct a predictive parsing table for the following grammar.

e  → t e'
e' → + t e' | ε
t  → f t'
t' → * f t' | ε
f  → ( e ) | x | y

LL(1) grammars If each cell in the parse table contains at most one entry, then a non-backtracking parser can be constructed and the grammar is said to be LL(1). First L: left-to-right scanning of the input. Second L: a leftmost derivation is constructed. The (1): one input symbol of look-ahead is used to decide which grammar production to choose.

Exercise 10 Write a syntax checker for the grammar of Exercise 9, utilising the predictive parsing table. int e() {... } It should return a non-zero value if some prefix of the string pointed to by next conforms to the grammar, otherwise it should return zero.

Answer (part 1)

int e() {
  if (*next == 'x') return t() && e1();
  if (*next == 'y') return t() && e1();
  if (*next == '(') return t() && e1();
  return 0;
}

int e1() {
  if (*next == '+') return eat('+') && t() && e1();
  if (*next == ')') return 1;
  if (*next == '\0') return 1;
  return 0;
}

Answer (part 2)

int t() {
  if (*next == 'x') return f() && t1();
  if (*next == 'y') return f() && t1();
  if (*next == '(') return f() && t1();
  return 0;
}

int t1() {
  if (*next == '+') return 1;
  if (*next == '*') return eat('*') && f() && t1();
  if (*next == ')') return 1;
  if (*next == '\0') return 1;
  return 0;
}

Answer (part 3)

int f() {
  if (*next == 'x') return eat('x');
  if (*next == 'y') return eat('y');
  if (*next == '(') return eat('(') && e() && eat(')');
  return 0;
}

(Notice how backtracking is not required.)

Predictive parsing algorithm Let s be a stack, initially containing the start symbol of the grammar, and let next point to the input string.

while (top(s) != $) {
  if (top(s) is a terminal) {
    if (top(s) == *next) { pop(s); next++; }
    else error();
  }
  else if (T[top(s), *next] == X → Y1 ... Yn) {
    pop(s);
    push(s, Yn ... Y1);   /* Y1 on top */
  }
}
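
The loop above is pseudo-code. Below is a minimal, self-contained C sketch of the same table-driven algorithm for the grammar of Exercise 9; the single-character renaming of the non-terminals (E, Q, T, S, F for e, e', t, t', f), the hard-coded table() function and the helper names are my own choices, not from the slides.

#include <stdio.h>
#include <string.h>

/* Grammar (Exercise 9), with renamed non-terminals:
     E -> T Q        Q -> + T Q | ε
     T -> F S        S -> * F S | ε
     F -> ( E ) | x | y
   The end of the input is marked with '$'. */

const char *next;                 /* input cursor */
char stack[100];
int top = -1;

/* Push the symbols of rhs in reverse order, so its first symbol ends up on top. */
void push_str(const char *rhs) {
  for (int i = (int)strlen(rhs) - 1; i >= 0; i--)
    stack[++top] = rhs[i];
}

/* The parse table T[n, a], hard-coded: returns the right-hand side to expand
   ("" means the ε production) or NULL if the cell is empty (syntax error). */
const char *table(char n, char a) {
  switch (n) {
  case 'E': return (a == 'x' || a == 'y' || a == '(') ? "TQ" : NULL;
  case 'Q': return (a == '+') ? "+TQ" : (a == ')' || a == '$') ? "" : NULL;
  case 'T': return (a == 'x' || a == 'y' || a == '(') ? "FS" : NULL;
  case 'S': return (a == '*') ? "*FS" : (a == '+' || a == ')' || a == '$') ? "" : NULL;
  case 'F': return (a == '(') ? "(E)" : (a == 'x') ? "x" : (a == 'y') ? "y" : NULL;
  }
  return NULL;
}

int is_nonterminal(char c) { return strchr("EQTSF", c) != NULL; }

int parse(const char *input) {
  next = input;
  top = -1;
  push_str("E$");                       /* start symbol on top of the end marker */
  while (stack[top] != '$') {
    char X = stack[top];
    if (!is_nonterminal(X)) {           /* terminal on top: must match the input */
      if (X == *next) { top--; next++; } else return 0;
    } else {                            /* non-terminal: consult the parse table */
      const char *rhs = table(X, *next);
      if (rhs == NULL) return 0;
      top--;
      push_str(rhs);
    }
  }
  return *next == '$';                  /* accept iff the whole input was consumed */
}

int main(void) {
  printf("%d\n", parse("x+y*(y+x)$"));  /* prints 1 */
  printf("%d\n", parse("x+*y$"));       /* prints 0 */
  return 0;
}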

Exercise 11 Give the steps that a predictive parser takes to parse the following input. x + x * y For each step (loop iteration), show the input stream, the stack, and the parser action.

Acknowledgements Plus Stanford University lecture notes by Maggie Johnson and Julie Zelenski.

APPENDIX

Context-free grammars Have four components: 1. A set of terminal symbols. 2. A set of non-terminal symbols. 3. A set of productions (or rules) of the form n → X1 ... Xn, where n is a non-terminal and X1 ... Xn is any sequence of terminals, non-terminals, and ε. 4. The start symbol (one of the non-terminals).

Notation Non-terminals are underlined. Rather than writing

e → x
e → e + e

we may write:

e → x | e + e

(Also, the symbols → and ::= will be used interchangeably.)

Why context-free? In the hierarchy Unrestricted ⊃ Context-Sensitive ⊃ Context-Free ⊃ Regular, context-free grammars strike a nice balance between expressive power and efficiency of parsing.

Chomsky hierarchy Let t range over terminals, x and z over non-terminals, and α, β, and γ over sequences of terminals, non-terminals, and ε.

Grammar             Valid productions
Unrestricted        α → β
Context-Sensitive   α x γ → α β γ
Context-Free        x → β
Regular             x → t, x → t z, x → ε

Backus-Naur Form BNF is a standard ASCII notation for the specification of context-free grammars whose terminals are ASCII characters. For example:

<exp> ::= <exp> "+" <exp> | <exp> "-" <exp> | <var>
<var> ::= "x" | "y"

The BNF notation can itself be specified in BNF.
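
One possible sketch of such a specification (my own illustration, ignoring whitespace and leaving the lexical form of <nonterminal> and <terminal> unspecified) is:

<grammar>      ::= <rule> | <rule> <grammar>
<rule>         ::= <nonterminal> "::=" <alternatives>
<alternatives> ::= <sequence> | <sequence> "|" <alternatives>
<sequence>     ::= <symbol> | <symbol> <sequence>
<symbol>       ::= <nonterminal> | <terminal>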