Lexical Analysis. Lecture 2-4



Lexical Analysis. Lecture 2-4. Notes by G. Necula, with additions by P. Hilfinger.

Administrivia. Moving to 60 Evans on Wednesday. HW1 available. Pyth manual available online. Please log into your account and electronically register today. Register your team with make-team; see the class announcement page. Project #1 available Friday. Use submit hw1 to submit your homework this week. Section 101 (9 AM) is gone.

Outline. Informal sketch of lexical analysis: identifies tokens in the input string. Issues in lexical analysis: lookahead, ambiguities. Specifying lexers: regular expressions, examples of regular expressions.

The Structure of a Compiler. Source → Lexical analysis → Tokens → Parsing → Interm. Language → Optimization → Code Gen. → Machine Code. Today we start with lexical analysis.

Lexical Analysis. What do we want to do? Example: if (i == j) z = 0; else z = 1; The input is just a sequence of characters: \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1; Goal: partition the input string into substrings and classify them according to their role.

What's a Token? The output of lexical analysis is a stream of tokens. A token is a syntactic category. In English: noun, verb, adjective, ... In a programming language: Identifier, Integer, Keyword, Whitespace, ... The parser relies on the token distinctions: e.g., identifiers are treated differently than keywords.

Tokens. Tokens correspond to sets of strings. Identifiers: strings of letters or digits, starting with a letter. Integers: non-empty strings of digits. Keywords: else or if or begin or ... Whitespace: non-empty sequences of blanks, newlines, and tabs. OpenPars: left parentheses.

Lexical Analyzer: Implementation. An implementation must do two things: 1. recognize substrings corresponding to tokens; 2. return (a) the type or syntactic category of the token, and (b) the value or lexeme of the token (the substring itself).
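For concreteness, here is a minimal Python sketch of what the lexer returns: a (syntactic category, lexeme) pair. The Token class and its field names are illustrative assumptions, not the course's actual code.

```python
from dataclasses import dataclass

@dataclass
class Token:
    kind: str    # syntactic category, e.g. "Identifier", "Integer", "Keyword"
    lexeme: str  # the matched substring itself

# e.g. lexing "if" would yield Token(kind="Keyword", lexeme="if")
print(Token("Keyword", "if"))
```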

Example. Our example again: \tif (i == j)\n\t\tz = 0;\n\telse\n\t\tz = 1; Token-lexeme pairs returned by the lexer: (Whitespace, "\t"), (Keyword, "if"), (OpenPar, "("), (Identifier, "i"), (Relation, "=="), (Identifier, "j"), ...

Lexical Analyzer: Implementation. The lexer usually discards uninteresting tokens that don't contribute to parsing. Examples: Whitespace, Comments. Question: what happens if we remove all whitespace and all comments prior to lexing?

Lookahead. Two important points: 1. The goal is to partition the string. This is implemented by reading left-to-right, recognizing one token at a time. 2. Lookahead may be required to decide where one token ends and the next token begins. Even our simple example has lookahead issues: i vs. if, = vs. ==.

Next. We need: a way to describe the lexemes of each token, and a way to resolve ambiguities. Is "if" two variables i and f? Is "==" two equal signs "=" "="?

Regular Languages. There are several formalisms for specifying tokens. Regular languages are the most popular: simple and useful theory, easy to understand, efficient implementations.

Languages. Def. Let Σ be a set of characters. A language over Σ is a set of strings of characters drawn from Σ (Σ is called the alphabet).

Examples of Languages. Alphabet = English characters, Language = English sentences. Alphabet = ASCII, Language = C programs. Not every string of English characters is an English sentence. Note: the ASCII character set is different from the English character set.

Notation. Languages are sets of strings. We need some notation for specifying which sets we want. For lexical analysis we care about regular languages, which can be described using regular expressions.

Regular Expressions and Regular Languages. Each regular expression is a notation for a regular language (a set of words). If A is a regular expression, then we write L(A) to refer to the language denoted by A.

Atomic Regular Expressions. Single character: 'c'. L('c') = { "c" } (for any c ∈ Σ). Concatenation: AB (where A and B are regular expressions). L(AB) = { ab | a ∈ L(A) and b ∈ L(B) }. Example: L('i' 'f') = { "if" } (we will abbreviate 'i' 'f' as if).

Compound Regular Expressions. Union: L(A | B) = L(A) ∪ L(B) = { s | s ∈ L(A) or s ∈ L(B) }. Examples: 'if' | 'then' | 'else' = { "if", "then", "else" }; '0' | '1' | ... | '9' = { "0", "1", ..., "9" } (note the "..." are just an abbreviation). Another example: L(('0' | '1')('0' | '1')) = { "00", "01", "10", "11" }.

More Compound Regular Expressions. So far we do not have a notation for infinite languages. Iteration: A*. L(A*) = { "" } ∪ L(A) ∪ L(AA) ∪ L(AAA) ∪ ... Examples: 0* = { "", "0", "00", "000", ... }; 1 0* = { strings starting with 1 and followed by 0's }. Epsilon: ε. L(ε) = { "" }.

Example: Keyword. Keyword: "else" or "if" or "begin" or ... : else | if | begin | ... (where 'else' abbreviates 'e' 'l' 's' 'e').

Example: Integers. Integer: a non-empty string of digits. digit = '0' | '1' | '2' | '3' | '4' | '5' | '6' | '7' | '8' | '9'. number = digit digit*. Abbreviation: A+ = A A*.

Example: Identifier. Identifier: strings of letters or digits, starting with a letter. letter = 'A' | ... | 'Z' | 'a' | ... | 'z'. identifier = letter (letter | digit)*. Is (letter* digit*) the same as (letter | digit)*?
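As an aside, the same definitions can be written with Python's re module; the patterns below are assumptions that mirror digit digit* and letter (letter | digit)*, not part of the lecture.

```python
import re

number     = re.compile(r"[0-9]+")                # digit digit*
identifier = re.compile(r"[A-Za-z][A-Za-z0-9]*")  # letter (letter | digit)*

print(number.fullmatch("2018") is not None)       # True
print(identifier.fullmatch("x27") is not None)    # True
print(identifier.fullmatch("27x") is not None)    # False: must start with a letter
```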

Example: Whitespace. Whitespace: a non-empty sequence of blanks, newlines, and tabs: (' ' | '\t' | '\n')+. (Can you spot a subtle omission?)

Example: Phone Numbers. Regular expressions are all around you! Consider (510) 643-1481. Σ = { 0, 1, 2, 3, ..., 9, (, ), - }. area = digit^3. exchange = digit^3. phone = digit^4. number = '(' area ')' exchange '-' phone.

Example: Email Addresses. Consider necula@cs.berkeley.edu. Σ = letters ∪ { '.', '@' }. name = letter+. address = name '@' name ('.' name)*.
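The same two shapes, written with Python's re module as an illustration; the exact patterns (including the space after the area code) are assumptions.

```python
import re

phone = re.compile(r"\([0-9]{3}\) [0-9]{3}-[0-9]{4}")    # '(' area ')' exchange '-' phone
email = re.compile(r"[A-Za-z]+@[A-Za-z]+(\.[A-Za-z]+)*") # name '@' name ('.' name)*

print(phone.fullmatch("(510) 643-1481") is not None)          # True
print(email.fullmatch("necula@cs.berkeley.edu") is not None)  # True
```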

Summary. Regular expressions describe many useful languages. Next: given a string s and a regular expression R, is s ∈ L(R)? But a yes/no answer is not enough! Instead: partition the input into lexemes. We will adapt regular expressions to this goal.

Next: Outline. Specifying lexical structure using regular expressions. Finite automata: Deterministic Finite Automata (DFAs), Non-deterministic Finite Automata (NFAs). Implementation of regular expressions: RegExp => NFA => DFA => Tables.

Regular Expressions => Lexical Spec. (1). 1. Select a set of tokens: Number, Keyword, Identifier, ... 2. Write a regular expression for the lexemes of each token: Number = digit+, Keyword = 'if' | 'else' | ..., Identifier = letter (letter | digit)*, OpenPar = '(', ...

Regular Expressions => Lexical Spec. (2). 3. Construct R, matching all lexemes for all tokens: R = Keyword | Identifier | Number | ... = R1 | R2 | R3 | ... Facts: if s ∈ L(R) then s is a lexeme; furthermore, s ∈ L(Ri) for some i; this i determines the token that is reported.

Regular Expressions => Lexical Spec. (3). 4. Let the input be x1 ... xn (x1 ... xn are characters in the language alphabet). For 1 ≤ i ≤ n, check whether x1 ... xi ∈ L(R). 5. It must be that x1 ... xi ∈ L(Rj) for some i and j. 6. Remove x1 ... xi from the input and go to (4).

Lexing Example. R = Whitespace | Integer | Identifier | '+'. Parse "f+3 +g": "f" matches R, more precisely Identifier; "+" matches R, more precisely '+'; ... The token-lexeme pairs are (Identifier, "f"), ('+', "+"), (Integer, "3"), (Whitespace, " "), ('+', "+"), (Identifier, "g"). We would like to drop the Whitespace tokens: after matching Whitespace, continue matching.

Ambiguities (1). There are ambiguities in the algorithm. Example: R = Whitespace | Integer | Identifier | '+'. Parse "foo+3": "f" matches R, more precisely Identifier; but also "fo" matches R, and "foo", but not "foo+". How much input is used? What if x1 ... xi ∈ L(R) and also x1 ... xK ∈ L(R)? Maximal munch rule: pick the longest possible substring that matches R.

More Ambiguities. R = Whitespace | 'new' | Integer | Identifier. Parse "new foo": "new" matches R, more precisely 'new', but also Identifier; which one do we pick? In general, what if x1 ... xi ∈ L(Rj) and x1 ... xi ∈ L(Rk)? Rule: use the rule listed first (j if j < k). We must list 'new' before Identifier.

Error Handling. R = Whitespace | Integer | Identifier | '+'. Parse "=56": no prefix matches R: not "=", nor "=5", nor "=56". Problem: we can't just get stuck. Solution: add a rule matching all bad strings, and put it last. Lexer tools allow the writing of: R = R1 | ... | Rn | Error. Token Error matches if nothing else matches.
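For concreteness, a minimal Python sketch of this matching loop, assuming Python's re module as the matcher: rules are tried in priority order, the longest match wins (maximal munch), ties go to the rule listed first, and a catch-all Error rule comes last. The rule names and patterns are illustrative assumptions, not the course's specification.

```python
import re

# Rules in priority order: 'new' is listed before Identifier so ties go to
# the keyword, and the catch-all Error rule comes last.
RULES = [
    ("Whitespace", re.compile(r"[ \t\n]+")),
    ("Keyword",    re.compile(r"new")),
    ("Integer",    re.compile(r"[0-9]+")),
    ("Identifier", re.compile(r"[A-Za-z][A-Za-z0-9]*")),
    ("Plus",       re.compile(r"\+")),
    ("Error",      re.compile(r".")),    # matches any single bad character
]

def lex(text):
    pos = 0
    while pos < len(text):
        # Maximal munch: among the rules that match at pos, take the longest
        # lexeme; max() keeps the first (highest-priority) rule on ties.
        candidates = [(name, m.group()) for name, pat in RULES
                      if (m := pat.match(text, pos))]
        kind, lexeme = max(candidates, key=lambda c: len(c[1]))
        pos += len(lexeme)
        if kind != "Whitespace":          # drop uninteresting tokens
            yield (kind, lexeme)

print(list(lex("new foo+3 =56")))
```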

Summary. Regular expressions provide a concise notation for string patterns. Their use in lexical analysis requires small extensions: to resolve ambiguities and to handle errors. Good algorithms are known (next): they require only a single pass over the input and few operations per character (table lookup).

Finite Automata. Regular expressions = specification. Finite automata = implementation. A finite automaton consists of: an input alphabet Σ, a set of states S, a start state n, a set of accepting states F ⊆ S, and a set of transitions state →(input) state.

Finite Automata. The transition s1 →(a) s2 is read: in state s1, on input a, go to state s2. At the end of the input: if in an accepting state => accept, otherwise => reject. If no transition is possible => reject.
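A minimal sketch of this acceptance rule in Python, for a deterministic automaton; the dictionary encoding of transitions is an assumption made for illustration.

```python
def accepts(transitions, start, accepting, s):
    """transitions: dict mapping (state, input character) -> next state."""
    state = start
    for ch in s:
        if (state, ch) not in transitions:
            return False                  # no transition possible => reject
        state = transitions[(state, ch)]
    return state in accepting             # end of input: accept iff in an accepting state
```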

Finite Automata State Graphs. [Notation legend: a state is drawn as a circle, the start state has an incoming arrow, an accepting state is drawn as a double circle, and a transition is an arrow labeled with an input character a.]

A Simple Example. A finite automaton that accepts only "1". [Diagram: a start state with a transition on 1 to an accepting state.] A finite automaton accepts a string if we can follow transitions labeled with the characters in the string from the start to some accepting state.

Another Simple Example. A finite automaton accepting any number of 1's followed by a single 0. Alphabet: {0, 1}. [Diagram: the start state has a self-loop on 1 and a transition on 0 to an accepting state.] Check that "1110" is accepted but "110..." is not.
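Using the accepts sketch above, this automaton can be checked directly; the state names A (start) and B (accepting) are assumptions for the two states in the diagram.

```python
# Self-loop on '1' at A, and a '0'-transition from A to the accepting state B.
delta = {("A", "1"): "A", ("A", "0"): "B"}
print(accepts(delta, "A", {"B"}, "1110"))   # True
print(accepts(delta, "A", {"B"}, "1100"))   # False: no transition out of B
```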

And Another Example. Alphabet {0,1}. What language does this recognize? [State diagram with transitions labeled 0 and 1.]

And Another Example. Alphabet still {0, 1}. [Diagram: a state with two outgoing transitions, both labeled 1.] The operation of the automaton is not completely defined by the input: on input "11" the automaton could be in either state.

Epsilon Moves. Another kind of transition: ε-moves. [Diagram: A →(ε) B.] The machine can move from state A to state B without reading input.

Deterministic and Nondeterministic Automata. Deterministic Finite Automata (DFA): one transition per input per state, no ε-moves. Nondeterministic Finite Automata (NFA): can have multiple transitions for one input in a given state, can have ε-moves. Finite automata have finite memory: they need only encode the current state.

Execution of Finite Automata. A DFA can take only one path through the state graph, completely determined by the input. NFAs can choose: whether to make ε-moves, and which of multiple transitions for a single input to take.

Acceptance of NFAs. An NFA can get into multiple states. [Small NFA diagram with transitions labeled 1, 0, 1.] Input: 0 1 0 1. Rule: an NFA accepts if it can get into a final state.
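A sketch of NFA acceptance in Python: track the set of states the NFA could be in, closing under ε-moves after every step. The (state, label) → set-of-states encoding and the "eps" label are assumptions made for illustration.

```python
def eps_closure(delta, states):
    """All states reachable from `states` using only ε-moves."""
    stack, closure = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in delta.get((s, "eps"), set()):
            if t not in closure:
                closure.add(t)
                stack.append(t)
    return closure

def nfa_accepts(delta, start, accepting, s):
    current = eps_closure(delta, {start})
    for ch in s:
        moved = set()
        for st in current:
            moved |= delta.get((st, ch), set())
        current = eps_closure(delta, moved)
    return bool(current & accepting)   # accepts if it *can* end in a final state
```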

NFA vs. DFA (1). NFAs and DFAs recognize the same set of languages (the regular languages). DFAs are easier to implement: there are no choices to consider.

NFA vs. DFA (2). For a given language the NFA can be simpler than the DFA. [Diagrams: an NFA over {0,1} and the corresponding, larger DFA.] The DFA can be exponentially larger than the NFA.

Regular Expressions to Finite Automata. High-level sketch: Lexical Specification → Regular expressions → NFA → DFA → Table-driven implementation of DFA.

Regular Expressions to NFA (1). For each kind of regular expression, define an NFA. Notation: NFA for rexp A. For ε: [a single ε-transition from the start state to an accepting state]. For input a: [a single a-transition from the start state to an accepting state].

Regular Expressions to NFA (2). For AB: [the NFA for A, whose accepting state has an ε-transition into the NFA for B]. For A | B: [a new start state with ε-transitions into the NFAs for A and B, whose accepting states have ε-transitions into a new accepting state].

Regular Expressions to NFA (3). For A*: [a new state with an ε-transition into the NFA for A and an ε-transition that bypasses A to the accepting state; A's accepting end has an ε-transition back, so A may be repeated zero or more times].
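A sketch of this RegExp → NFA construction (Thompson's construction) in Python; the dictionary encoding, helper names, and state numbering are assumptions, and the resulting edges use the same (state, label) → set-of-states format as the nfa_accepts sketch above.

```python
from itertools import count

_ids = count()

def new_state():
    return next(_ids)

def char(c):
    """NFA for the single input character c."""
    s, f = new_state(), new_state()
    return {"start": s, "accept": f, "edges": {(s, c): {f}}}

def epsilon():
    """NFA for ε."""
    s, f = new_state(), new_state()
    return {"start": s, "accept": f, "edges": {(s, "eps"): {f}}}

def _merged_edges(*nfas):
    edges = {}
    for n in nfas:
        for key, targets in n["edges"].items():
            edges.setdefault(key, set()).update(targets)
    return edges

def concat(a, b):
    """NFA for AB: A's accepting state feeds B's start state by an ε-move."""
    edges = _merged_edges(a, b)
    edges.setdefault((a["accept"], "eps"), set()).add(b["start"])
    return {"start": a["start"], "accept": b["accept"], "edges": edges}

def union(a, b):
    """NFA for A | B: new start and accepting states joined by ε-moves."""
    s, f = new_state(), new_state()
    edges = _merged_edges(a, b)
    edges[(s, "eps")] = {a["start"], b["start"]}
    edges.setdefault((a["accept"], "eps"), set()).add(f)
    edges.setdefault((b["accept"], "eps"), set()).add(f)
    return {"start": s, "accept": f, "edges": edges}

def star(a):
    """NFA for A*: enter A or skip it; loop back from A's accepting state."""
    s, f = new_state(), new_state()
    edges = _merged_edges(a)
    edges[(s, "eps")] = {a["start"], f}
    edges.setdefault((a["accept"], "eps"), set()).update({a["start"], f})
    return {"start": s, "accept": f, "edges": edges}

# The example on the next slide, (1|0)*1:
nfa = concat(star(union(char("1"), char("0"))), char("1"))
```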

Example of RegExp -> NFA conversion. Consider the regular expression (1 | 0)*1. [Diagram: the resulting NFA, with states A through J connected by ε-, 1-, and 0-transitions.]

Next. Lexical Specification → Regular expressions → NFA → DFA → Table-driven implementation of DFA.

NFA to DFA. The Trick. Simulate the NFA. Each state of the resulting DFA = a non-empty subset of the states of the NFA. Start state = the set of NFA states reachable through ε-moves from the NFA start state. Add a transition S →(a) S' to the DFA iff S' is the set of NFA states reachable from the states in S after seeing the input a, considering ε-moves as well.
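A sketch of this subset construction in Python, reusing eps_closure and the NFA encoding from the earlier sketches; a single NFA accepting state is assumed (as Thompson's construction produces).

```python
def nfa_to_dfa(edges, start, accept, alphabet):
    """edges: (state, label) -> set of states; returns DFA transitions over frozensets."""
    start_set = frozenset(eps_closure(edges, {start}))
    dfa_trans, dfa_accept = {}, set()
    worklist, seen = [start_set], {start_set}
    while worklist:
        S = worklist.pop()
        if accept in S:
            dfa_accept.add(S)
        for a in alphabet:
            moved = set()
            for st in S:
                moved |= edges.get((st, a), set())
            T = frozenset(eps_closure(edges, moved))
            if not T:
                continue                    # no transition on a out of S
            dfa_trans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    return dfa_trans, start_set, dfa_accept
```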

NFA -> DFA Example. [Diagram: the NFA for (1 | 0)*1 from the previous slide, and the resulting DFA whose states are the subsets {A,B,C,D,H,I}, {F,G,A,B,C,D,H,I}, and {E,J,G,A,B,C,D,H,I}, with transitions on 0 and 1.]

NFA to DFA. Remark. An NFA may be in many states at any time. How many different states? If there are N states, the NFA must be in some subset of those N states. How many non-empty subsets are there? 2^N - 1: finitely many, but exponentially many.

Implementation. A DFA can be implemented by a 2D table T: one dimension is states, the other dimension is input symbols. For every transition Si →(a) Sk, define T[i, a] = k. DFA execution: if in state Si and the input is a, read T[i, a] = k and skip to state Sk. Very efficient.

Table Implementation of a DFA. [Diagram: a DFA with states S (start), T, and U; from every state, input 0 goes to T and input 1 goes to U.] Transition table (columns = current state, rows = input symbol):

        S   T   U
    0   T   T   T
    1   U   U   U
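A sketch of table-driven execution in Python using the table above; the dictionary encoding of T and the choice of U as the accepting state are assumptions made for illustration.

```python
T = {
    ("S", "0"): "T", ("T", "0"): "T", ("U", "0"): "T",
    ("S", "1"): "U", ("T", "1"): "U", ("U", "1"): "U",
}

def run(table, start, accepting, s):
    state = start
    for ch in s:
        state = table[(state, ch)]   # one table lookup per input character
    return state in accepting

print(run(T, "S", {"U"}, "0101"))    # True, under the assumption that U accepts
```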

Implementation (Cont.). NFA -> DFA conversion is at the heart of tools such as flex or jflex. But DFAs can be huge. In practice, flex-like tools trade off speed for space in the choice of NFA and DFA representations.

Perl's Regular Expressions. Some kind of pattern-matching feature is now common in programming languages; Perl's is widely copied (cf. Java, Python). These are not regular expressions, despite the name. E.g., the pattern /A (\S+) is a $1/ matches "A spade is a spade" and "A deal is a deal", but not "A spade is a shovel". But no regular expression recognizes this language! Capturing substrings with ( ) is itself an extension.

Implementing Perl Patterns (Sketch). Can use NFAs, with some modification: implement an NFA as one would a DFA, plus use backtracking search to deal with states that have nondeterministic choices. Add extra states (with ε-transitions) for parentheses: the '(' state records the place in the input as a side effect; the ')' state saves the string started at the matching '('; $n matches the input against the stored value. Backtracking is much slower than a DFA implementation.
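A sketch in Python of this backtracking style of matching, over a tiny pattern AST (single character, concatenation, alternation, star): every nondeterministic choice is explored rather than committing to one. The AST encoding is an assumption, and capture groups are omitted for brevity.

```python
def match(pat, s, i=0):
    """Return the set of positions where `pat` can finish matching, starting at s[i:]."""
    kind = pat[0]
    if kind == "char":
        return {i + 1} if i < len(s) and s[i] == pat[1] else set()
    if kind == "cat":
        return {k for j in match(pat[1], s, i) for k in match(pat[2], s, j)}
    if kind == "alt":                      # explore both branches
        return match(pat[1], s, i) | match(pat[2], s, i)
    if kind == "star":                     # zero or more repetitions
        ends, frontier = {i}, {i}
        while frontier:
            frontier = {k for j in frontier for k in match(pat[1], s, j)} - ends
            ends |= frontier
        return ends

# (1|0)*1 again, this time as an AST:
p = ("cat", ("star", ("alt", ("char", "1"), ("char", "0"))), ("char", "1"))
s = "1101"
print(len(s) in match(p, s))   # True: the whole string matches (1|0)*1
```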