Lexical Analysis 1 / 52



2 Outline
1 Scanning Tokens
2 Regular Expressions
3 Finite State Automata
4 Non-deterministic (NFA) Versus Deterministic Finite State Automata (DFA)
5 Regular Expressions to NFA
6 NFA to DFA
7 DFA to Minimal DFA
8 JavaCC: a Tool for Generating Scanners

3 Scanning Tokens
The first step in compiling a program is to break it into tokens (aka lexemes); for example, given the j-- program

    package pass;

    import java.lang.System;

    public class Factorial {

        // Two methods and a field

        public static int factorial(int n) {
            if (n <= 0)
                return 1;
            else
                return n * factorial(n - 1);
        }

        public static void main(String[] args) {
            int x = n;
            System.out.println(x + "! = " + factorial(x));
        }

        static int n = 5;
    }

we want to produce the sequence of tokens

    package, pass, ;, import, java, ., lang, ., System, ;, public, class,
    Factorial, {, public, static, int, factorial, (, int, n, ), {, if, (, n,
    <=, 0, ), return, 1, ;, else, return, n, *, factorial, (, n, -, 1, ), ;,
    }, public, static, void, main, (, String, [, ], args, ), {, int, x, =, n,
    ;, System, ., out, ., println, (, x, +, "! = ", +, factorial, (, x, ), ),
    ;, }, static, int, n, =, 5, ;, and }

4 Scanning Tokens
We separate the lexemes into categories; in our example program
public, class, static, and void are reserved words
Factorial, main, String, args, System, out, and println are all identifiers
The string "! = " is a literal, a string literal in this instance
The rest are operators and separators
The program that breaks the input stream of characters into tokens is called a lexical analyzer or a scanner
A scanner may be hand-crafted, or it may be generated automatically from a specification consisting of a sequence of regular expressions
A state transition diagram is a natural way of describing a scanner

5 Scanning Tokens
A state transition diagram for identifiers and integers and the corresponding code

    if (isLetter(ch) || ch == '_' || ch == '$') {
        buffer = new StringBuffer();
        while (isLetter(ch) || isDigit(ch) || ch == '_' || ch == '$') {
            buffer.append(ch);
            nextCh();
        }
        return new TokenInfo(IDENTIFIER, buffer.toString(), line);
    }
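The loop above can be sketched as a self-contained Java method over a plain String; the name scanIdentifier and the String-based interface are illustrative only, not the j-- scanner's actual API.

```java
public class IdScan {
    // Returns the identifier starting at position pos, or null if no
    // identifier starts there. Mirrors the state transition diagram:
    // letter/_/$ to start, then letters, digits, _ and $ until a break.
    static String scanIdentifier(String input, int pos) {
        if (pos >= input.length()) return null;
        char ch = input.charAt(pos);
        if (!(Character.isLetter(ch) || ch == '_' || ch == '$')) return null;
        StringBuilder buffer = new StringBuilder();
        int i = pos;
        while (i < input.length()) {
            ch = input.charAt(i);
            if (!(Character.isLetter(ch) || Character.isDigit(ch)
                    || ch == '_' || ch == '$')) break;
            buffer.append(ch);
            i++;
        }
        return buffer.toString();
    }

    public static void main(String[] args) {
        System.out.println(scanIdentifier("x1 = n;", 0)); // prints "x1"
    }
}
```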

6 Scanning Tokens

    else if (ch == '0') {
        nextCh();
        return new TokenInfo(INT_LITERAL, "0", line);
    } else if (isDigit(ch)) {
        buffer = new StringBuffer();
        while (isDigit(ch)) {
            buffer.append(ch);
            nextCh();
        }
        return new TokenInfo(INT_LITERAL, buffer.toString(), line);
    }

7 Scanning Tokens A state transition diagram for reserved words and the corresponding code

8 Scanning Tokens

    ...
    else if (ch == 'n') {
        buffer.append(ch);
        nextCh();
        if (ch == 'e') {
            buffer.append(ch);
            nextCh();
            if (ch == 'w') {
                buffer.append(ch);
                nextCh();
                if (!isLetter(ch) && !isDigit(ch) && ch != '_' && ch != '$') {
                    return new TokenInfo(NEW, line);
                }
            }
        } else if (ch == 'u') {
            buffer.append(ch);
            nextCh();
            if (ch == 'l') {
                buffer.append(ch);
                nextCh();
                if (ch == 'l') {
                    buffer.append(ch);
                    nextCh();
                    if (!isLetter(ch) && !isDigit(ch) && ch != '_' && ch != '$') {
                        return new TokenInfo(NULL, line);
                    }
                }
            }
        }
    }

9 Scanning Tokens

        while (isLetter(ch) || isDigit(ch) || ch == '_' || ch == '$') {
            buffer.append(ch);
            nextCh();
        }
        return new TokenInfo(IDENTIFIER, buffer.toString(), line);
    } else ...

A better approach for recognizing reserved words

10 Scanning Tokens

    if (isLetter(ch) || ch == '_' || ch == '$') {
        buffer = new StringBuffer();
        while (isLetter(ch) || isDigit(ch) || ch == '_' || ch == '$') {
            buffer.append(ch);
            nextCh();
        }
        String identifier = buffer.toString();
        if (reserved.containsKey(identifier)) {
            return new TokenInfo(reserved.get(identifier), line);
        } else {
            return new TokenInfo(IDENTIFIER, identifier, line);
        }
    }

The above approach relies on a map (hash table), reserved, mapping reserved identifiers to their representations:

    reserved = new Hashtable<String, Integer>();
    reserved.put("abstract", ABSTRACT);
    reserved.put("boolean", BOOLEAN);
    reserved.put("char", CHAR);
    ...
    reserved.put("while", WHILE);
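The table-driven lookup above can be sketched as a small runnable example; the token-code constants here are made-up integers standing in for the scanner's generated constants, and only a few reserved words are shown.

```java
import java.util.HashMap;
import java.util.Map;

public class Reserved {
    // Illustrative token codes (the real scanner uses generated constants).
    static final int ABSTRACT = 1, BOOLEAN = 2, WHILE = 3, IDENTIFIER = 100;

    // The reserved-word map: reserved identifier -> its token code.
    static final Map<String, Integer> reserved = new HashMap<>();
    static {
        reserved.put("abstract", ABSTRACT);
        reserved.put("boolean", BOOLEAN);
        reserved.put("while", WHILE);
    }

    // After scanning an identifier, one lookup decides its kind.
    static int kindOf(String identifier) {
        return reserved.containsKey(identifier)
                ? reserved.get(identifier)
                : IDENTIFIER;
    }

    public static void main(String[] args) {
        System.out.println(kindOf("while"));  // a reserved word's code
        System.out.println(kindOf("whilst")); // an ordinary identifier
    }
}
```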

11 Scanning Tokens A state transition diagram for separators and operators and the corresponding code

12 Scanning Tokens

    switch (ch) {
        ...
        case ';':
            nextCh();
            return new TokenInfo(SEMI, line);
        case '=':
            nextCh();
            if (ch == '=') {
                nextCh();
                return new TokenInfo(EQUAL, line);
            } else {
                return new TokenInfo(ASSIGN, line);
            }
        case '!':
            nextCh();
            return new TokenInfo(LNOT, line);
        case '*':
            nextCh();
            return new TokenInfo(STAR, line);
    }

13 Scanning Tokens
A state transition diagram for whitespace and the corresponding code

    while (isWhitespace(ch)) {
        nextCh();
    }

14 Scanning Tokens A state transition diagram for comments and the corresponding code

15 Scanning Tokens

    boolean moreWhitespace = true;
    while (moreWhitespace) {
        while (isWhitespace(ch)) {
            nextCh();
        }
        if (ch == '/') {
            nextCh();
            if (ch == '/') {
                // CharReader maps all new lines to '\n'
                while (ch != '\n' && ch != EOFCH) {
                    nextCh();
                }
            } else {
                reportScannerError("Operator / is not supported in j--.");
            }
        } else {
            moreWhitespace = false;
        }
    }

16 Regular Expressions
Regular expressions provide a simple notation for describing patterns of characters in a text
A regular expression defines a language of strings over an alphabet, and may take one of the following forms
1 If a is in our alphabet, then the regular expression a describes the language L(a) consisting of the string a
2 If r and s are regular expressions, then their concatenation rs is also a regular expression, describing the language L(rs) of all possible strings obtained by concatenating a string in the language described by r to a string in the language described by s
3 If r and s are regular expressions, then the alternation r|s is also a regular expression, describing the language L(r|s) consisting of all strings described by either r or s
4 If r is a regular expression, the repetition (aka the Kleene closure) r* is also a regular expression, describing the language L(r*) consisting of strings obtained by concatenating zero or more instances of strings described by r together
5 ɛ is a regular expression describing the language containing only the empty string
6 If r is a regular expression, then (r) is also a regular expression denoting the same language

17 Regular Expressions
For example, given an alphabet {a, b}
1 a(a|b)* denotes the language of non-empty strings of a's and b's, beginning with an a
2 aa|ab|ba|bb denotes the language of all two-symbol strings over the alphabet
3 (a|b)*ab denotes the language of all strings of a's and b's, ending in ab
As another example, in a programming language such as Java
1 Reserved words may be described as abstract|boolean|char|...|while
2 Operators may be described as =|==|>|...|*
3 Identifiers may be described as ([a-zA-Z]|_|$)([a-zA-Z0-9]|_|$)*
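The identifier pattern above can be checked directly with java.util.regex, whose syntax uses the same | for alternation and * for the Kleene closure (the dollar sign must be escaped, since it is a regex metacharacter):

```java
import java.util.regex.Pattern;

public class IdPattern {
    // The identifier regular expression from the slide, in Java regex syntax.
    static final Pattern ID =
            Pattern.compile("([a-zA-Z]|_|\\$)([a-zA-Z0-9]|_|\\$)*");

    // True if the whole string is a valid identifier.
    static boolean isIdentifier(String s) {
        return ID.matcher(s).matches();
    }

    public static void main(String[] args) {
        System.out.println(isIdentifier("factorial")); // true
        System.out.println(isIdentifier("2fast"));     // false: starts with a digit
    }
}
```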

18 Finite State Automata
For any language described by a regular expression, there is a state transition diagram, called a Finite State Automaton, that can parse strings in the language
A Finite State Automaton (FSA) F is a quintuple F = (Σ, S, s0, M, F) where
Σ is the input alphabet
S is a set of states
s0 ∈ S is a special start state
M is a set of moves or state transitions of the form m(r, a) = s, where r, s ∈ S and a ∈ Σ, read as: if one is in state r and the next input symbol is a, scan the a and move into state s
F ⊆ S is a set of final states

19 Finite State Automata
For example, consider the regular expression (a|b)a*b over the alphabet {a, b} that describes the language consisting of all strings starting with either an a or a b, followed by zero or more a's, and ending with a b
An FSA F that recognizes the language is F = (Σ, S, s0, M, F) where
Σ = {a, b}, S = {0, 1, 2}, s0 = 0, M = {m(0, a) = 1, m(0, b) = 1, m(1, a) = 1, m(1, b) = 2}, F = {2}
The corresponding transition diagram is shown below

20 Finite State Automata
An FSA recognizes strings in the same way that state transition diagrams do
For the previous FSA, given the input sentence baaab and beginning in the start state 0, the following moves are prescribed

    m(0, b) = 1 ⇒ in state 0 we scan a b and go into state 1
    m(1, a) = 1 ⇒ in state 1 we scan an a and go back into state 1
    m(1, a) = 1 ⇒ in state 1 we scan an a and go back into state 1 (again)
    m(1, a) = 1 ⇒ in state 1 we scan an a and go back into state 1 (again)
    m(1, b) = 2 ⇒ finally, in state 1 we scan a b and go into the final state 2
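The trace above can be checked mechanically by encoding the move function as a table; the encoding below (string keys of the form state-then-symbol) is just one convenient sketch, with the state numbers taken from the slide.

```java
import java.util.Map;
import java.util.Set;

public class FSA {
    // m(state, symbol) -> next state for the (a|b)a*b FSA;
    // an absent entry means there is no move (we get stuck).
    static final Map<String, Integer> moves = Map.of(
            "0a", 1, "0b", 1,
            "1a", 1, "1b", 2);
    static final Set<Integer> finals = Set.of(2);

    // Runs the moves over the input; accepts if we end in a final state.
    static boolean recognizes(String input) {
        int state = 0;
        for (char c : input.toCharArray()) {
            Integer next = moves.get("" + state + c);
            if (next == null) return false; // stuck: no move defined
            state = next;
        }
        return finals.contains(state);
    }

    public static void main(String[] args) {
        System.out.println(recognizes("baaab")); // the traced input: true
    }
}
```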

21 Non-deterministic (NFA) Versus Deterministic Finite State Automata (DFA)
A deterministic finite state automaton (DFA) is an automaton without ɛ-moves in which there is a unique move from any state on an input symbol a, i.e., there cannot be two moves m(r, a) = s and m(r, a) = t where s ≠ t
A non-deterministic finite state automaton (NFA) is an automaton that allows either of the following
More than one move from the same state on the same input symbol a, i.e., m(r, a) = s and m(r, a) = t, where s ≠ t
An ɛ-move defined on the empty string ɛ, i.e., m(r, ɛ) = s, which says we can move from state r to state s without scanning any input symbols

22 Non-deterministic (NFA) Versus Deterministic Finite State Automata (DFA)
For example, an NFA that recognizes all strings of a's and b's that begin with an a and end with a b is N = (Σ, S, s0, M, F) where
Σ = {a, b}, S = {0, 1, 2}, s0 = 0, M = {m(0, a) = 1, m(1, a) = 1, m(1, b) = 1, m(1, ɛ) = 0, m(1, b) = 2}, and F = {2}
The corresponding transition diagram is shown below
An NFA is said to recognize an input string if, starting in the start state, there exists a set of moves based on the input that takes us into one of the final states

23 Regular Expressions to NFA
Given any regular expression r, we can construct (using Thompson's construction procedure) an NFA N that recognizes the same language, i.e., L(N) = L(r)
(Rule 1) If the regular expression r takes the form of an input symbol a, then the NFA that recognizes it has two states, a start state and a final state, and a move on symbol a from the start state to the final state
(Rule 2) If Nr and Ns are NFA recognizing the languages described by the regular expressions r and s respectively, then we can create a new NFA recognizing the language described by rs as follows: we define an ɛ-move from the final state of Nr to the start state of Ns, then choose the start state of Nr to be our new start state and the final state of Ns to be our new final state

24 Regular Expressions to NFA
(Rule 3) If Nr and Ns are NFA recognizing the languages described by the regular expressions r and s respectively, then we can create a new NFA recognizing the language described by r|s as follows: we define a new start state, having ɛ-moves to each of the start states of Nr and Ns, and we define a new final state and add ɛ-moves from each of the final states of Nr and Ns to this state

25 Regular Expressions to NFA
(Rule 4) If Nr is an NFA recognizing the language described by a regular expression r, then we construct a new NFA recognizing r* as follows: we add an ɛ-move from Nr's final state back to its start state, define a new start state and a new final state, add ɛ-moves from the new start state to both Nr's start state and the new final state, and define an ɛ-move from Nr's final state to the new final state
(Rule 5) If r is ɛ, then we just need an ɛ-move from the start state to the final state
(Rule 6) If Nr is our NFA recognizing the language described by r, then Nr also recognizes the language described by (r)

26 Regular Expressions to NFA
As an example, let's construct an NFA for the regular expression (a|b)a*b, which has the following syntactic structure
We start with the first a and b; the automata recognizing these are easy enough to construct using Rule 1

27 Regular Expressions to NFA
We then put them together using Rule 3 to produce an NFA recognizing a|b
The NFA recognizing (a|b) is the same as that recognizing a|b, by Rule 6
An NFA recognizing the second instance of a is simple enough, by Rule 1 again
The NFA recognizing a* can be constructed from the NFA for a, by applying Rule 4

28 Regular Expressions to NFA
We then apply Rule 2 to construct an NFA recognizing the concatenation (a|b)a*

29 Regular Expressions to NFA
An NFA recognizing the second instance of b is simple enough, by Rule 1 again
Finally, we can apply Rule 2 again to produce an NFA recognizing (a|b)a*b

30 NFA to DFA
For any NFA, there is an equivalent DFA that can be constructed using the powerset (or subset) construction procedure
The DFA that we construct is always in a state that simulates all the possible states that the NFA could possibly be in, having scanned the same portion of the input
The computation of all states reachable from a given state s based on ɛ-moves alone is called taking the ɛ-closure of that state
The ɛ-closure(s) for a state s includes s and all states reachable from s using ɛ-moves alone, i.e., ɛ-closure(s) = {s} ∪ {r ∈ S | there is a path of only ɛ-moves from s to r}
The ɛ-closure(S) for a set of states S includes S and all states reachable from any state s ∈ S using ɛ-moves alone

31 NFA to DFA
Algorithm ɛ-closure(S) for a Set of States S
Input: a set of states, S
Output: ɛ-closure(S)

    Stack P.addAll(S) // a stack containing all states in S
    Set C.addAll(S) // the closure initially contains the states in S
    while !P.empty() do
        s ← P.pop()
        for r in m(s, ɛ) do // m(s, ɛ) is a set of states
            if r ∉ C then
                P.push(r)
                C.add(r)
            end if
        end for
    end while
    return C

Algorithm ɛ-closure(s) for a State s
Input: a state, s
Output: ɛ-closure(s)

    Set S.add(s) // S = {s}
    return ɛ-closure(S)
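A minimal Java rendering of the algorithm above, using the ɛ-moves of the NFA built earlier for (a|b)a*b; the move table below is my reading of that construction (states 0 through 11), since the diagram itself is not reproduced in this transcription.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class Closure {
    // Epsilon-moves of the (a|b)a*b NFA:
    // state -> states reachable by a single epsilon-move.
    static final Map<Integer, Set<Integer>> eps = Map.of(
            0, Set.of(1, 3), 2, Set.of(5), 4, Set.of(5),
            5, Set.of(6), 6, Set.of(7, 9), 8, Set.of(7, 9), 9, Set.of(10));

    // The algorithm: seed the stack and the closure with the given states,
    // then keep adding states one more epsilon-move away until nothing is new.
    static Set<Integer> epsilonClosure(Set<Integer> states) {
        Deque<Integer> p = new ArrayDeque<>(states); // the stack P
        Set<Integer> c = new HashSet<>(states);      // the closure C
        while (!p.isEmpty()) {
            int s = p.pop();
            for (int r : eps.getOrDefault(s, Set.of())) {
                if (c.add(r)) { // add() is true only when r is new
                    p.push(r);
                }
            }
        }
        return c;
    }

    public static void main(String[] args) {
        System.out.println(epsilonClosure(Set.of(0))); // {0, 1, 3} in some order
        System.out.println(epsilonClosure(Set.of(2))); // {2, 5, 6, 7, 9, 10}
    }
}
```

Running this reproduces the closures used on the next slides, e.g. ɛ-closure(0) = {0, 1, 3}.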

32 NFA to DFA
NFA for the regular expression (a|b)a*b from before
In the corresponding DFA, the start state s0 = ɛ-closure(0) = {0, 1, 3}
From s0, scanning the symbol a, we want to go into a state that reflects all the states we could be in after scanning an a in the NFA, which are 2, and via ɛ-moves 5, 6, 7, 9 and 10
m(s0, a) = s1, where s1 = ɛ-closure(2) = {2, 5, 6, 7, 9, 10}
Similarly, scanning a symbol b in state s0, we get
m(s0, b) = s2, where s2 = ɛ-closure(4) = {4, 5, 6, 7, 9, 10}

33 NFA to DFA
From state s1, scanning an a, we have to consider where we could have gone from the states {2, 5, 6, 7, 9, 10} in the NFA; from state 7, scanning an a, we go into state 8, and then (by ɛ-moves) 7, 9, and 10
m(s1, a) = s3, where s3 = ɛ-closure(8) = {7, 8, 9, 10}
Now, from state s1, scanning b, we have
m(s1, b) = s4, where s4 = ɛ-closure(11) = {11}

34 NFA to DFA
From state s2, scanning an a takes us into a state reflecting 8, and then (by ɛ-moves) 7, 9 and 10, generating state {7, 8, 9, 10}, which is s3
Scanning a b from state s2 takes us into a state reflecting 11, generating state {11}, which is s4
m(s2, a) = s3
m(s2, b) = s4
From state s3, scanning an a takes us back into s3, and scanning a b takes us into s4
m(s3, a) = s3
m(s3, b) = s4
There are no moves at all out of state s4, so we have found all of our transitions and all of our states
Any state reflecting a final state in the NFA is a final state in the DFA; in our example, only s4 is a final state because it contains (the final) state 11 from the original NFA

35 NFA to DFA The DFA derived from our NFA for the regular expression (a|b)a*b is shown below

36 NFA to DFA
Algorithm NFA to DFA Construction
Input: an NFA, N = (Σ, S, s0, M, F)
Output: a DFA, D = (Σ, SD, sD0, MD, FD)

    Set sD0 ← ɛ-closure(s0)
    Set SD.add(sD0)
    Moves MD
    Stack stk.push(sD0)
    i ← 0
    while !stk.empty() do
        t ← stk.pop()
        for a in Σ do
            sDi+1 ← ɛ-closure(m(t, a))
            if sDi+1 ≠ {} then
                if sDi+1 ∉ SD then
                    SD.add(sDi+1) // We have a new state
                    stk.push(sDi+1)
                    i ← i + 1
                    MD.add(mD(t, a) = i)
                else if ∃j, sj ∈ SD and sDi+1 = sj then
                    MD.add(mD(t, a) = j) // In the case the state already exists
                end if
            end if
        end for
    end while
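The construction can be sketched compactly in Java for the (a|b)a*b NFA; the ɛ-move and symbol-move tables below are assumed from the worked example, and DFA states are represented directly as sets of NFA states rather than re-numbered as the algorithm does.

```java
import java.util.*;

public class SubsetConstruction {
    // Epsilon-moves and symbol-moves of the (a|b)a*b NFA (states 0..11).
    static final Map<Integer, Set<Integer>> eps = Map.of(
            0, Set.of(1, 3), 2, Set.of(5), 4, Set.of(5),
            5, Set.of(6), 6, Set.of(7, 9), 8, Set.of(7, 9), 9, Set.of(10));
    static final Map<Integer, Map<Character, Integer>> moves = Map.of(
            1, Map.of('a', 2), 3, Map.of('b', 4),
            7, Map.of('a', 8), 10, Map.of('b', 11));

    // Epsilon-closure of a set of states, as in the previous algorithm.
    static Set<Integer> close(Set<Integer> states) {
        Deque<Integer> stack = new ArrayDeque<>(states);
        Set<Integer> c = new HashSet<>(states);
        while (!stack.isEmpty())
            for (int r : eps.getOrDefault(stack.pop(), Set.of()))
                if (c.add(r)) stack.push(r);
        return c;
    }

    // Every DFA state (each a set of NFA states) reachable from the start.
    static Set<Set<Integer>> dfaStates() {
        Set<Integer> start = close(Set.of(0));
        Set<Set<Integer>> seen = new HashSet<>(Set.of(start));
        Deque<Set<Integer>> work = new ArrayDeque<>(List.of(start));
        while (!work.isEmpty()) {
            Set<Integer> t = work.pop();
            for (char a : new char[] {'a', 'b'}) {
                Set<Integer> target = new HashSet<>();
                for (int s : t) {
                    Integer next = moves.getOrDefault(s, Map.of()).get(a);
                    if (next != null) target.add(next);
                }
                if (!target.isEmpty()) {
                    Set<Integer> d = close(target);
                    if (seen.add(d)) work.push(d); // a new DFA state
                }
            }
        }
        return seen;
    }

    public static void main(String[] args) {
        System.out.println(dfaStates().size()); // 5 states, as on the slides
    }
}
```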

37 NFA to DFA
Algorithm NFA to DFA Construction (contd.)

    Set FD
    for sD in SD do
        for s in sD do
            if s ∈ F then
                FD.add(sD)
            end if
        end for
    end for
    return D = (Σ, SD, sD0, MD, FD)

38 DFA to Minimal DFA
How do we come up with a smaller DFA that recognizes the same language?
Given an input string in our language, there must be a sequence of moves taking us from the start state to one of the final states
Given an input string that is not in our language, we must get stuck with no move to take or end up in a non-final state
We must combine as many states together as we can, so that the states in our new DFA are partitions of the states in the original (perhaps larger) DFA
A good strategy is to start with just one or two partitions of the states, and then split states when it becomes necessary
An obvious first partition has two sets: the set of final states and the set of non-final states; the latter could be empty, leaving us with a single partition containing all states

39 DFA to Minimal DFA
For example, consider the DFA for (a|b)a*b, partitioned as follows
The two states in this new DFA consist of the start state, {0, 1, 2, 3}, and the final state, {4}
We must make sure that, from a particular partition, each input symbol moves us to an identical partition

40 DFA to Minimal DFA
Beginning in any state in the partition {0, 1, 2, 3}, an a takes us to one of the states in {0, 1, 2, 3}
m(0, a) = 1
m(1, a) = 3
m(2, a) = 3
m(3, a) = 3
So, our partition {0, 1, 2, 3} is fine so far as moves on the symbol a are concerned
For the symbol b,
m(0, b) = 2
m(1, b) = 4
m(2, b) = 4
m(3, b) = 4
So we must split the partition {0, 1, 2, 3} into two new partitions, {0} and {1, 2, 3}
If we are in state s, and for an input symbol a in our alphabet there is no defined move m(s, a) = t, we invent a special dead state d, so that we can say m(s, a) = d

41 DFA to Minimal DFA We are left with a partition into three sets, {0}, {1, 2, 3} and {4}, as shown below

42 DFA to Minimal DFA
We need not worry about {0} and {4} as they contain just one state and so correspond to (those) states in the original machine
We consider {1, 2, 3} to see if it is necessary to split it
m(1, a) = 3
m(2, a) = 3
m(3, a) = 3
m(1, b) = 4
m(2, b) = 4
m(3, b) = 4
Thus, there is no further state splitting to be done, and we are left with the following smaller DFA

43 DFA to Minimal DFA
Algorithm Minimizing a DFA
Input: a DFA, D = (Σ, S, s0, M, F)
Output: a partition of S

    Set partition ← {S − F, F} // start with two sets: the non-final states and the final states
    // Splitting the states
    while splitting occurs do
        for Set set in partition do
            if set.size() > 1 then
                for Symbol a in Σ do
                    // Determine if moves from this state force a split
                    State s ← a state chosen from set
                    targetSet ← the set in the partition containing m(s, a)
                    Set set1 ← {states s from set, such that m(s, a) ∈ targetSet}
                    Set set2 ← {states s from set, such that m(s, a) ∉ targetSet}
                    if set2 ≠ {} then
                        // Yes, split the states
                        replace set in partition by set1 and set2, and break out of
                        the for-loop to continue with the next set in the partition
                    end if
                end for
            end if
        end for
    end while
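A sketch of this partitioning loop in Java, run on the five-state DFA for (a|b)a*b (moves as listed on the earlier slide, final state 4); blocks are compared by identity, since blockOf returns the partition's own set objects.

```java
import java.util.*;

public class Minimize {
    // The DFA's move table: state -> (symbol -> next state); state 4 is final
    // and has no moves (the dead state is left implicit here).
    static final Map<Integer, Map<Character, Integer>> m = Map.of(
            0, Map.of('a', 1, 'b', 2), 1, Map.of('a', 3, 'b', 4),
            2, Map.of('a', 3, 'b', 4), 3, Map.of('a', 3, 'b', 4),
            4, Map.of());

    // The block of the partition containing the given state.
    static Set<Integer> blockOf(List<Set<Integer>> partition, Integer state) {
        for (Set<Integer> block : partition)
            if (block.contains(state)) return block;
        return null; // an undefined move: treated as the dead state
    }

    static List<Set<Integer>> minimize() {
        // Start with the non-final states and the final states.
        List<Set<Integer>> partition = new ArrayList<>(List.of(
                new HashSet<>(Set.of(0, 1, 2, 3)), new HashSet<>(Set.of(4))));
        boolean splitting = true;
        while (splitting) {
            splitting = false;
            for (int i = 0; i < partition.size(); i++) {
                Set<Integer> set = partition.get(i);
                if (set.size() <= 1) continue;
                for (char a : new char[] {'a', 'b'}) {
                    int chosen = set.iterator().next();
                    Set<Integer> targetSet = blockOf(partition, m.get(chosen).get(a));
                    Set<Integer> set1 = new HashSet<>(), set2 = new HashSet<>();
                    for (int s : set)
                        (blockOf(partition, m.get(s).get(a)) == targetSet
                                ? set1 : set2).add(s);
                    if (!set2.isEmpty()) { // a symbol disagrees: split
                        partition.set(i, set1);
                        partition.add(set2);
                        splitting = true;
                        break; // continue with the next set in the partition
                    }
                }
            }
        }
        return partition;
    }

    public static void main(String[] args) {
        System.out.println(minimize().size()); // 3 blocks: {0}, {1,2,3}, {4}
    }
}
```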

44 DFA to Minimal DFA
Let us run through another example, starting from a regular expression, producing an NFA, then a DFA, and finally a minimal DFA
Consider the regular expression (a|b)*baa, having the following syntactic structure

45 DFA to Minimal DFA We apply Thompson's construction procedure to produce the NFA shown below

46 DFA to Minimal DFA
Using the powerset construction method, we derive a DFA having the following states

    s0 = {0, 1, 2, 4, 7, 8}
    m(s0, a) : {1, 2, 3, 4, 6, 7, 8} = s1
    m(s0, b) : {1, 2, 4, 5, 6, 7, 8, 9, 10} = s2
    m(s1, a) : {1, 2, 3, 4, 6, 7, 8} = s1
    m(s1, b) : {1, 2, 4, 5, 6, 7, 8, 9, 10} = s2
    m(s2, a) : {1, 2, 3, 4, 6, 7, 8, 11, 12} = s3
    m(s2, b) : {1, 2, 4, 5, 6, 7, 8, 9, 10} = s2
    m(s3, a) : {1, 2, 3, 4, 6, 7, 8, 13} = s4
    m(s3, b) : {1, 2, 4, 5, 6, 7, 8, 9, 10} = s2
    m(s4, a) : {1, 2, 3, 4, 6, 7, 8} = s1
    m(s4, b) : {1, 2, 4, 5, 6, 7, 8, 9, 10} = s2

47 DFA to Minimal DFA The DFA itself is shown below

48 DFA to Minimal DFA Finally, we use partitioning to produce the minimal DFA shown below

49 DFA to Minimal DFA We re-number the states to produce the equivalent DFA shown below
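Since this transcription omits the figures, the minimal DFA for (a|b)*baa can be reconstructed independently: each state records the longest suffix of the input read so far that is also a prefix of baa. A sketch, with the caveat that this state numbering need not match the slides:

```java
public class BaaDFA {
    // States: 0 = "", 1 = "b", 2 = "ba", 3 = "baa" (the final state).
    // move[state][symbol], where symbol 0 is 'a' and symbol 1 is 'b'.
    static final int[][] move = {
            {0, 1}, // from "":    a keeps "",     b gives "b"
            {2, 1}, // from "b":   a gives "ba",   b stays at "b"
            {3, 1}, // from "ba":  a gives "baa",  b gives "b"
            {0, 1}, // from "baa": a gives "",     b gives "b"
    };

    // Accepts exactly the strings over {a, b} ending in baa.
    static boolean recognizes(String input) {
        int state = 0;
        for (char c : input.toCharArray())
            state = move[state][c == 'a' ? 0 : 1];
        return state == 3;
    }

    public static void main(String[] args) {
        System.out.println(recognizes("abbaa")); // true: ends in baa
        System.out.println(recognizes("baab"));  // false
    }
}
```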

50 JavaCC: a Tool for Generating Scanners
JavaCC (the CC stands for compiler-compiler) is a tool for generating lexical analyzers from regular expressions and parsers from context-free grammars
A lexical grammar specification consists of a set of regular expressions and a set of lexical states; from any particular state, only certain regular expressions may be matched in scanning the input
There is a standard DEFAULT state, in which scanning generally begins; one may specify additional states as required
Scanning a token proceeds by considering all regular expressions in the current state and choosing the one that consumes the greatest number of input characters
After a match, one can specify a state the scanner should go into; otherwise the scanner stays in the current state
There are four kinds of regular expressions, determining what happens when the regular expression has been matched
1 SKIP: throws away the matched string
2 MORE: continues to the next state, taking the matched string along
3 TOKEN: creates a token from the matched string and returns it to the parser (or any caller)
4 SPECIAL_TOKEN: creates a special token that does not participate in the parsing

51 JavaCC: a Tool for Generating Scanners
For example, a SKIP can be used for ignoring whitespace

    SKIP: { " " | "\t" | "\n" | "\r" | "\f" }

We can deal with single-line comments with the following regular expressions

    MORE: { "//": IN_SINGLE_LINE_COMMENT }

    <IN_SINGLE_LINE_COMMENT> SPECIAL_TOKEN: {
        <SINGLE_LINE_COMMENT: "\n" | "\r" | "\r\n"> : DEFAULT
    }

    <IN_SINGLE_LINE_COMMENT> MORE: { < ~[] > }

An alternative regular expression dealing with single-line comments

    SPECIAL_TOKEN: {
        <SINGLE_LINE_COMMENT: "//" (~["\n","\r"])* ("\n" | "\r" | "\r\n")>
    }

Reserved words and symbols are specified by simply spelling them out; for example

    TOKEN: {
        <ABSTRACT: "abstract">
      | <BOOLEAN: "boolean">
      ...
      | <COMMA: ",">
      | <DOT: ".">
    }

52 JavaCC: a Tool for Generating Scanners
A token for scanning identifiers

    TOKEN: {
        <IDENTIFIER: (<LETTER> | "_" | "$") (<LETTER> | <DIGIT> | "_" | "$")*>
      | <#LETTER: ["a"-"z", "A"-"Z"]>
      | <#DIGIT: ["0"-"9"]>
    }

Tokens for scanning literals

    TOKEN: {
        <INT_LITERAL: ("0" | <NON_ZERO_DIGIT> (<DIGIT>)*)>
      | <#NON_ZERO_DIGIT: ["1"-"9"]>
      | <CHAR_LITERAL: "'" (<ESC> | ~["'", "\\", "\n", "\r"]) "'">
      | <STRING_LITERAL: "\"" (<ESC> | ~["\"", "\\", "\n", "\r"])* "\"">
      | <#ESC: "\\" ["n", "t", "b", "r", "f", "\\", "'", "\""]>
    }

JavaCC takes a specification of the lexical syntax and produces several Java files, one of which is TokenManager.java, a program that implements a state machine; this is our scanner
The lexical specification for j-- is contained in $j/j--/src/jminusminus/j--.jj


DVA337 HT17 - LECTURE 4. Languages and regular expressions

DVA337 HT17 - LECTURE 4 Languages and regular expressions 1 SO FAR 2 TODAY Formal definition of languages in terms of strings Operations on strings and languages Definition of regular expressions Meaning

Part 5 Program Analysis Principles and Techniques

1 Part 5 Program Analysis Principles and Techniques Front end 2 source code scanner tokens parser il errors Responsibilities: Recognize legal programs Report errors Produce il Preliminary storage map Shape

CSE 413 Programming Languages & Implementation. Hal Perkins Autumn 2012 Grammars, Scanners & Regular Expressions

CSE 413 Programming Languages & Implementation Hal Perkins Autumn 2012 Grammars, Scanners & Regular Expressions 1 Agenda Overview of language recognizers Basic concepts of formal grammars Scanner Theory

Lexical Analysis. Lecture 3-4

Lexical Analysis Lecture 3-4 Notes by G. Necula, with additions by P. Hilfinger Prof. Hilfinger CS 164 Lecture 3-4 1 Administrivia I suggest you start looking at Python (see link on class home page). Please

Lexical Analysis. Lecture 3. January 10, 2018

Lexical Analysis Lecture 3 January 10, 2018 Announcements PA1c due tonight at 11:50pm! Don t forget about PA1, the Cool implementation! Use Monday s lecture, the video guides and Cool examples if you re

CS415 Compilers. Lexical Analysis

CS415 Compilers Lexical Analysis These slides are based on slides copyrighted by Keith Cooper, Ken Kennedy & Linda Torczon at Rice University Lecture 7 1 Announcements First project and second homework

2. Lexical Analysis! Prof. O. Nierstrasz!

2. Lexical Analysis! Prof. O. Nierstrasz! Thanks to Jens Palsberg and Tony Hosking for their kind permission to reuse and adapt the CS132 and CS502 lecture notes.! http://www.cs.ucla.edu/~palsberg/! http://www.cs.purdue.edu/homes/hosking/!

CS321 Languages and Compiler Design I. Winter 2012 Lecture 4

CS321 Languages and Compiler Design I Winter 2012 Lecture 4 1 LEXICAL ANALYSIS Convert source file characters into token stream. Remove content-free characters (comments, whitespace,...) Detect lexical

CSE302: Compiler Design

CSE302: Compiler Design Instructor: Dr. Liang Cheng Department of Computer Science and Engineering P.C. Rossin College of Engineering & Applied Science Lehigh University February 01, 2007 Outline Recap

Interpreter. Scanner. Parser. Tree Walker. read. request token. send token. send AST I/O. Console

Scanning 1 read Interpreter Scanner request token Parser send token Console I/O send AST Tree Walker 2 Scanner This process is known as: Scanning, lexing (lexical analysis), and tokenizing This is the

Compiling Regular Expressions COMP360

Compiling Regular Expressions COMP360 Logic is the beginning of wisdom, not the end. Leonard Nimoy Compiler s Purpose The compiler converts the program source code into a form that can be executed by the

Lexical Analysis (ASU Ch 3, Fig 3.1)

Lexical Analysis (ASU Ch 3, Fig 3.1) Implementation by hand automatically ((F)Lex) Lex generates a finite automaton recogniser uses regular expressions Tasks remove white space (ws) display source program

Lexical Analysis. Finite Automata

#1 Lexical Analysis Finite Automata Cool Demo? (Part 1 of 2) #2 Cunning Plan Informal Sketch of Lexical Analysis LA identifies tokens from input string lexer : (char list) (token list) Issues in Lexical

Part II : Lexical Analysis

Part II : Lexical Analysis Regular Languages Translation from regular languages to program code A grammar for JO Context-free Grammar of JO Assignment 1 Martin Odersky, LAMP/DI 1 Regular Languages Definition

Lexical Analysis. Chapter 1, Section Chapter 3, Section 3.1, 3.3, 3.4, 3.5 JFlex Manual

Lexical Analysis Chapter 1, Section 1.2.1 Chapter 3, Section 3.1, 3.3, 3.4, 3.5 JFlex Manual Inside the Compiler: Front End Lexical analyzer (aka scanner) Converts ASCII or Unicode to a stream of tokens

CS164: Programming Assignment 2 Dlex Lexer Generator and Decaf Lexer

CS164: Programming Assignment 2 Dlex Lexer Generator and Decaf Lexer Assigned: Thursday, September 16, 2004 Due: Tuesday, September 28, 2004, at 11:59pm September 16, 2004 1 Introduction Overview In this

CSE 413 Programming Languages & Implementation. Hal Perkins Winter 2019 Grammars, Scanners & Regular Expressions

CSE 413 Programming Languages & Implementation Hal Perkins Winter 2019 Grammars, Scanners & Regular Expressions 1 Agenda Overview of language recognizers Basic concepts of formal grammars Scanner Theory

Automated Tools. The Compilation Task. Automated? Automated? Easier ways to create parsers. The final stages of compilation are language dependant

Automated Tools Easier ways to create parsers The Compilation Task Input character stream Lexer Token stream Parser Abstract Syntax Tree Analyser Annotated AST Code Generator Code CC&P 2003 1 CC&P 2003

Implementation of Lexical Analysis

Implementation of Lexical Analysis Lecture 4 (Modified by Professor Vijay Ganesh) Tips on Building Large Systems KISS (Keep It Simple, Stupid!) Don t optimize prematurely Design systems that can be tested

ECS 120 Lesson 7 Regular Expressions, Pt. 1

ECS 120 Lesson 7 Regular Expressions, Pt. 1 Oliver Kreylos Friday, April 13th, 2001 1 Outline Thus far, we have been discussing one way to specify a (regular) language: Giving a machine that reads a word

Compilers CS S-01 Compiler Basics & Lexical Analysis

Compilers CS414-2005S-01 Compiler Basics & Lexical Analysis David Galles Department of Computer Science University of San Francisco 01-0: Syllabus Office Hours Course Text Prerequisites Test Dates & Testing

Front End: Lexical Analysis. The Structure of a Compiler

Front End: Lexical Analysis The Structure of a Compiler Constructing a Lexical Analyser By hand: Identify lexemes in input and return tokens Automatically: Lexical-Analyser generator We will learn about

Compiler Construction

Compiler Construction Lecture 2: Lexical Analysis I (Introduction) Thomas Noll Lehrstuhl für Informatik 2 (Software Modeling and Verification) noll@cs.rwth-aachen.de http://moves.rwth-aachen.de/teaching/ss-14/cc14/

Lexical Analysis. Prof. James L. Frankel Harvard University

Lexical Analysis Prof. James L. Frankel Harvard University Version of 5:37 PM 30-Jan-2018 Copyright 2018, 2016, 2015 James L. Frankel. All rights reserved. Regular Expression Notation We will develop a

CS308 Compiler Principles Lexical Analyzer Li Jiang

CS308 Lexical Analyzer Li Jiang Department of Computer Science and Engineering Shanghai Jiao Tong University Content: Outline Basic concepts: pattern, lexeme, and token. Operations on languages, and regular

i About the Tutorial A compiler translates the codes written in one language to some other language without changing the meaning of the program. It is also expected that a compiler should make the target

Formal Languages and Grammars. Chapter 2: Sections 2.1 and 2.2

Formal Languages and Grammars Chapter 2: Sections 2.1 and 2.2 Formal Languages Basis for the design and implementation of programming languages Alphabet: finite set Σ of symbols String: finite sequence

COP4020 Programming Languages. Syntax Prof. Robert van Engelen

COP4020 Programming Languages Syntax Prof. Robert van Engelen Overview n Tokens and regular expressions n Syntax and context-free grammars n Grammar derivations n More about parse trees n Top-down and

Chapter 3: Lexical Analysis

Chapter 3: Lexical Analysis A simple way to build a lexical analyzer is to construct a diagram that illustrates the structure of tokens of the source language, and then to hand translate the diagram into

2. Syntax and Type Analysis

Content of Lecture Syntax and Type Analysis Lecture Compilers Summer Term 2011 Prof. Dr. Arnd Poetzsch-Heffter Software Technology Group TU Kaiserslautern Prof. Dr. Arnd Poetzsch-Heffter Syntax and Type

Cunning Plan. Informal Sketch of Lexical Analysis. Issues in Lexical Analysis. Specifying Lexers

Cunning Plan Informal Sketch of Lexical Analysis LA identifies tokens from input string lexer : (char list) (token list) Issues in Lexical Analysis Lookahead Ambiguity Specifying Lexers Regular Expressions

Syntax and Type Analysis

Syntax and Type Analysis Lecture Compilers Summer Term 2011 Prof. Dr. Arnd Poetzsch-Heffter Software Technology Group TU Kaiserslautern Prof. Dr. Arnd Poetzsch-Heffter Syntax and Type Analysis 1 Content

Non-deterministic Finite Automata (NFA)

Non-deterministic Finite Automata (NFA) CAN have transitions on the same input to different states Can include a ε or λ transition (i.e. move to new state without reading input) Often easier to design

Languages, Automata, Regular Expressions & Scanners. Winter /8/ Hal Perkins & UW CSE B-1

CSE 401 Compilers Languages, Automata, Regular Expressions & Scanners Hal Perkins Winter 2010 1/8/2010 2002-10 Hal Perkins & UW CSE B-1 Agenda Quick review of basic concepts of formal grammars Regular

CS 403 Compiler Construction Lecture 3 Lexical Analysis [Based on Chapter 1, 2, 3 of Aho2]

CS 403 Compiler Construction Lecture 3 Lexical Analysis [Based on Chapter 1, 2, 3 of Aho2] 1 What is Lexical Analysis? First step of a compiler. Reads/scans/identify the characters in the program and groups

COMPILER DESIGN UNIT I LEXICAL ANALYSIS. Translator: It is a program that translates one language to another Language.

UNIT I LEXICAL ANALYSIS Translator: It is a program that translates one language to another Language. Source Code Translator Target Code 1. INTRODUCTION TO LANGUAGE PROCESSING The Language Processing System

Assignment 1 (Lexical Analyzer)

Assignment 1 (Lexical Analyzer) Compiler Construction CS4435 (Spring 2015) University of Lahore Maryam Bashir Assigned: Saturday, March 14, 2015. Due: Monday 23rd March 2015 11:59 PM Lexical analysis Lexical

[Lexical Analysis] Bikash Balami

1 [Lexical Analysis] Compiler Design and Construction (CSc 352) Compiled By Central Department of Computer Science and Information Technology (CDCSIT) Tribhuvan University, Kirtipur Kathmandu, Nepal 2

COP4020 Programming Languages. Syntax Prof. Robert van Engelen

COP4020 Programming Languages Syntax Prof. Robert van Engelen Overview Tokens and regular expressions Syntax and context-free grammars Grammar derivations More about parse trees Top-down and bottom-up

Structure of Programming Languages Lecture 3

Structure of Programming Languages Lecture 3 CSCI 6636 4536 Spring 2017 CSCI 6636 4536 Lecture 3... 1/25 Spring 2017 1 / 25 Outline 1 Finite Languages Deterministic Finite State Machines Lexical Analysis

Compilers CS S-01 Compiler Basics & Lexical Analysis

Compilers CS414-2017S-01 Compiler Basics & Lexical Analysis David Galles Department of Computer Science University of San Francisco 01-0: Syllabus Office Hours Course Text Prerequisites Test Dates & Testing

UNIT II LEXICAL ANALYSIS

UNIT II LEXICAL ANALYSIS 2 Marks 1. What are the issues in lexical analysis? Simpler design Compiler efficiency is improved Compiler portability is enhanced. 2. Define patterns/lexeme/tokens? This set

CSE Lecture 4: Scanning and parsing 28 Jan Nate Nystrom University of Texas at Arlington

CSE 5317 Lecture 4: Scanning and parsing 28 Jan 2010 Nate Nystrom University of Texas at Arlington Administrivia hcp://groups.google.com/group/uta- cse- 3302 I will add you to the group soon TA Derek White

WARNING for Autumn 2004:

CSE 413 Programming Languages Autumn 2003 Max Points 50 Closed book, closed notes, no electronics. Do your own work! WARNING for Autumn 2004 Last year s exam did not cover Scheme and Java, but this year

Theoretical Part. Chapter one:- - What are the Phases of compiler? Answer:

Theoretical Part Chapter one:- - What are the Phases of compiler? Six phases Scanner Parser Semantic Analyzer Source code optimizer Code generator Target Code Optimizer Three auxiliary components Literal

Lexical Analysis/Scanning

Compiler Design 1 Lexical Analysis/Scanning Compiler Design 2 Input and Output The input is a stream of characters (ASCII codes) of the source program. The output is a stream of tokens or symbols corresponding

Regular Languages. MACM 300 Formal Languages and Automata. Formal Languages: Recap. Regular Languages

Regular Languages MACM 3 Formal Languages and Automata Anoop Sarkar http://www.cs.sfu.ca/~anoop The set of regular languages: each element is a regular language Each regular language is an example of a

Writing a Lexical Analyzer in Haskell (part II)

Writing a Lexical Analyzer in Haskell (part II) Today Regular languages and lexicographical analysis part II Some of the slides today are from Dr. Saumya Debray and Dr. Christian Colberg This week PA1:

2068 (I) Attempt all questions.

2068 (I) 1. What do you mean by compiler? How source program analyzed? Explain in brief. 2. Discuss the role of symbol table in compiler design. 3. Convert the regular expression 0 + (1 + 0)* 00 first

Finite automata. We have looked at using Lex to build a scanner on the basis of regular expressions.

Finite automata We have looked at using Lex to build a scanner on the basis of regular expressions. Now we begin to consider the results from automata theory that make Lex possible. Recall: An alphabet

Lexical Analysis - An Introduction. Lecture 4 Spring 2005 Department of Computer Science University of Alabama Joel Jones

Lexical Analysis - An Introduction Lecture 4 Spring 2005 Department of Computer Science University of Alabama Joel Jones Copyright 2003, Keith D. Cooper, Ken Kennedy & Linda Torczon, all rights reserved.

Scanners. Xiaokang Qiu Purdue University. August 24, ECE 468 Adapted from Kulkarni 2012

Scanners Xiaokang Qiu Purdue University ECE 468 Adapted from Kulkarni 2012 August 24, 2016 Scanners Sometimes called lexers Recall: scanners break input stream up into a set of tokens Identifiers, reserved

MIT Specifying Languages with Regular Expressions and Context-Free Grammars. Martin Rinard Massachusetts Institute of Technology

MIT 6.035 Specifying Languages with Regular essions and Context-Free Grammars Martin Rinard Massachusetts Institute of Technology Language Definition Problem How to precisely define language Layered structure

CMSC 350: COMPILER DESIGN

Lecture 11 CMSC 350: COMPILER DESIGN see HW3 LLVMLITE SPECIFICATION Eisenberg CMSC 350: Compilers 2 Discussion: Defining a Language Premise: programming languages are purely formal objects We (as language

MIT Specifying Languages with Regular Expressions and Context-Free Grammars

MIT 6.035 Specifying Languages with Regular essions and Context-Free Grammars Martin Rinard Laboratory for Computer Science Massachusetts Institute of Technology Language Definition Problem How to precisely