CSE 130 Programming Language Principles & Paradigms Lecture # 5. Chapter 4 Lexical and Syntax Analysis

Chapter 4 Lexical and Syntax Analysis

Introduction
- Language implementation systems must analyze source code, regardless of the specific implementation approach
- Nearly all syntax analysis is based on a formal description of the syntax of the source language, nearly always in BNF/EBNF
- The syntax analysis portion of a language processor nearly always consists of two parts:
  1. A low-level part called a lexical analyzer (lexer) (mathematically, a finite automaton based on a regular grammar)
  2. A high-level part called a syntax analyzer, or parser (mathematically, a push-down automaton based on a context-free grammar, or BNF)

Introduction (Continued)
- Reasons to use BNF to describe syntax:
  1. It provides a clear and concise syntax description
  2. The parser can be based directly on the BNF
  3. Parsers based on BNF are easy to maintain
- Reasons to separate lexical and syntax analysis:
  1. Simplicity - less complex approaches can be used for lexical analysis, and separating the two simplifies the parser
  2. Efficiency - separation allows the lexical analyzer to be optimized independently
  3. Portability - parts of the lexical analyzer may not be portable, but the parser is always portable

Lexical Analysis
- A lexical analyzer is a pattern matcher for character strings
- A lexical analyzer is a front end for the parser
- It identifies substrings of the source program that belong together - lexemes
- Lexemes match a character pattern, which is associated with a lexical category called a token
  - For example, sum is a lexeme; its token may be IDENT
- The lexical analyzer is usually a function that is called by the parser when it needs the next token (it reads ahead only one token)

Lexical Analysis (continued)
- Three approaches to building a lexical analyzer:
  1. Write a formal description of the tokens and use a software tool that constructs table-driven lexical analyzers from such a description
  2. Design a state diagram that describes the tokens and write a program that implements the state diagram
  3. Design a state diagram that describes the tokens and hand-construct a table-driven implementation of the state diagram
- The book focuses on approach #2; approach #1 is possible for compiler development using tools such as Lex/Flex and Yacc/Bison

Lexical Analysis (continued)
- State diagram design:
  - A naïve state diagram would have a transition from every state on every character in the source language; unfortunately, such a diagram would be very large!
  - In many cases, transitions can be combined to simplify the state diagram. A few simplifying assumptions follow:
    - When recognizing an identifier, all uppercase and lowercase letters are equivalent - use a character class that includes all letters
    - When recognizing an integer literal, all digits are equivalent - use a digit class

Lexical Analysis (continued)
- Reserved words and identifiers can be recognized together (rather than having a part of the diagram for each reserved word)
  - Use a table lookup to determine whether a possible identifier is in fact a reserved word
- Convenient utility subprograms we would likely use:
  1. getchar - gets the next character of input, puts it in nextchar, determines its class, and puts the class in charclass
  2. addchar - puts the character from nextchar into the place the lexeme is being accumulated, lexeme
  3. lookup - determines whether the string in lexeme is a reserved word (returns a code)

State Diagram Visualization

Lexical Analysis (continued)
- Implementation (assume initialization):

  int lex() {
    switch (charclass) {
      case LETTER:
        addchar();
        getchar();
        while (charclass == LETTER || charclass == DIGIT) {
          addchar();
          getchar();
        }
        return lookup(lexeme);
        break;
      case DIGIT:
        addchar();
        getchar();
        while (charclass == DIGIT) {
          addchar();
          getchar();
        }
        return INT_LIT;
        break;
    } /* End of switch */
  } /* End of function lex */

The Parsing Problem
- Goals of the parser, given an input program:
  - Find all syntax errors; for each, produce an appropriate diagnostic message and recover quickly
  - Produce the parse tree, or at least a trace of the parse tree, for the program
- Two categories of parsers:
  - Top down - produce the parse tree, beginning at the root
    - Order is that of a leftmost derivation
  - Bottom up - produce the parse tree, beginning at the leaves
    - Order is that of the reverse of a rightmost derivation
- Parsers look only one token ahead in the input

The Parsing Problem (Continued)
Top-down Parsers
- Given a sentential form, xAα, the parser must choose the correct A-rule to get the next sentential form in the leftmost derivation, using only the first token produced by A
- The most common top-down parsing algorithms:
  1. Recursive descent - a coded implementation
  2. LL parsers - a table-driven implementation

The Parsing Problem (Continued)
Bottom-up parsers
- Given a right sentential form, α, the parser must determine which substring of α is the right-hand side of the rule in the grammar that must be reduced to produce the previous sentential form in the rightmost derivation
- The most common bottom-up parsing algorithms are in the LR family (LALR, SLR, canonical LR)

The Parsing Problem (Continued)
The Complexity of Parsing
- Parsers that work for any unambiguous grammar are complex and inefficient (O(n^3), where n is the length of the input)
- Compilers use parsers that work only for a subset of all unambiguous grammars, but do it in linear time (O(n), where n is the length of the input)

Recursive-Descent Parsing
Recursive Descent Process
- There is a subprogram for each nonterminal in the grammar, which can parse sentences that can be generated by that nonterminal
- EBNF is ideally suited to be the basis for a recursive-descent parser, because EBNF minimizes the number of nonterminals
- A grammar for simple expressions:
  <expr>   -> <term> {(+ | -) <term>}
  <term>   -> <factor> {(* | /) <factor>}
  <factor> -> id | ( <expr> )
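As an illustration (not on the slides), a leftmost derivation of id + id * id from this grammar, expanding each EBNF repetition { ... } exactly once where a repetition is needed:

```text
<expr> => <term> + <term>
       => <factor> + <term>
       => id + <term>
       => id + <factor> * <factor>
       => id + id * <factor>
       => id + id * id
```

At every step the leftmost nonterminal is expanded, which is exactly the order in which a top-down parser builds the parse tree.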

Recursive-Descent Parsing (Continued)
- Assume we have a lexical analyzer named lex, which puts the next token code in nexttoken
- The coding process when there is only one RHS:
  - For each terminal symbol in the RHS, compare it with the next input token; if they match, continue; if not, there is an error
  - For each nonterminal symbol in the RHS, call its associated parsing subprogram

Recursive-Descent Parsing (Continued)

  /* Function expr
     Parses strings in the language generated by the rule:
     <expr> -> <term> {(+ | -) <term>}
  */
  void expr() {
    /* Parse the first term */
    term();
    /* As long as the next token is + or -, call lex to get
       the next token, and parse the next term */
    while (nexttoken == PLUS_CODE || nexttoken == MINUS_CODE) {
      lex();
      term();
    }
  }

Recursive-Descent Parsing (Continued)
- This particular routine does not detect errors
- Convention: every parsing routine leaves the next token in nexttoken
- A nonterminal that has more than one RHS requires an initial process to determine which RHS it is to parse
  - The correct RHS is chosen on the basis of the next token of input (the lookahead)
  - The next token is compared with the first token that can be generated by each RHS until a match is found
  - If no match is found, it is a syntax error
- We'll see the code in a moment

Recursive-Descent Parsing (Continued)

  /* Function factor
     Parses strings in the language generated by the rule:
     <factor> -> id | ( <expr> )
  */
  void factor() {
    /* Determine which RHS */
    if (nexttoken == ID_CODE)
      /* For the RHS id, just call lex */
      lex();

(continued on next slide)

Recursive-Descent Parsing (Continued)

    /* If the RHS is ( <expr> ), call lex to pass over the left
       parenthesis, call expr, and check for the right parenthesis */
    else if (nexttoken == LEFT_PAREN_CODE) {
      lex();
      expr();
      if (nexttoken == RIGHT_PAREN_CODE)
        lex();
      else
        error();
    } /* End of else if (nexttoken == ... */
    else
      error(); /* Neither RHS matches */
  } /* End of function factor */

Recursive-Descent Parsing (Continued)
The LL Grammar Class
- The Left Recursion Problem
  - If a grammar has left recursion, either direct or indirect, it cannot be the basis for a top-down parser
  - A grammar can be modified to remove left recursion
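As a sketch of the standard transformation (the example grammar here is assumed, not from the slides): a directly left-recursive rule A -> A α | β is rewritten with a new nonterminal so that the recursion occurs on the right:

```text
E  -> E + T | T       (left recursive: a top-down parser for E would
                       call itself forever without consuming input)

becomes

E  -> T E'
E' -> + T E' | ε      (E' generates zero or more "+ T" tails)
```

Both grammars generate the same language, but only the second can drive a top-down parser.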

Recursive-Descent Parsing (Continued)
- The other characteristic of grammars that disallows top-down parsing is the lack of pairwise disjointness
  - The inability to determine the correct RHS on the basis of one token of lookahead
- Def: FIRST(α) = {a | α =>* aβ} (if α =>* ε, then ε is in FIRST(α))
- Pairwise Disjointness Test: for each nonterminal A in the grammar that has more than one RHS, and for each pair of rules A -> αi and A -> αj, it must be true that FIRST(αi) ∩ FIRST(αj) = ∅ (the intersection must be the empty set)

Recursive-Descent Parsing (Continued)
- Examples:
  - A -> a | bB | cAb   (pairwise disjoint: the RHSs begin with a, b, and c)
  - A -> a | aB         (not pairwise disjoint: both RHSs begin with a)
- Left factoring can resolve the problem. Replace
    <variable> -> identifier | identifier [<expression>]
  with
    <variable> -> identifier <new>
    <new> -> ε | [<expression>]
  or
    <variable> -> identifier [ [<expression>] ]
  (the outer brackets are metasymbols of EBNF)

Bottom-up Parsing (continued)
- Shift-Reduce Algorithms
  - Reduce is the action of replacing the handle on the top of the parse stack with its corresponding LHS
  - Shift is the action of moving the next token to the top of the parse stack
- Advantages of LR parsers:
  1. They will work for nearly all grammars that describe programming languages.
  2. They work on a larger class of grammars than other bottom-up algorithms, but are as efficient as any other bottom-up parser.
  3. They can detect syntax errors as soon as possible.
  4. The LR class of grammars is a superset of the class parsable by LL parsers.
- Due to their complexity, LR parsers must be constructed with a tool
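To illustrate the shift and reduce actions, here is a parse of id + id * id using the usual unambiguous BNF expression grammar (E -> E + T | T, T -> T * F | F, F -> id; this grammar is assumed for the example, since the EBNF grammar above is meant for top-down parsing). The stack grows to the right; $ marks the bottom of the stack and the end of input:

```text
Stack        Input        Action
$            id+id*id$    shift
$id          +id*id$      reduce F -> id
$F           +id*id$      reduce T -> F
$T           +id*id$      reduce E -> T
$E           +id*id$      shift
$E+          id*id$       shift
$E+id        *id$         reduce F -> id
$E+F         *id$         reduce T -> F
$E+T         *id$         shift   (* has higher precedence, so keep shifting)
$E+T*        id$          shift
$E+T*id      $            reduce F -> id
$E+T*F       $            reduce T -> T*F
$E+T         $            reduce E -> E+T
$E           $            accept
```

Each reduce step replaces the handle on top of the stack with its LHS; the sequence of reductions, read in reverse, is exactly a rightmost derivation of the input.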