CS241 Final Exam Review
Winter 2018

1 Notes and disclaimer

These review materials only cover topics discussed after the midterm. For a review of pre-midterm content, see the midterm review materials. These questions are intended to be a selection of topics which may be tested on the final, but they have been written without knowledge of the specific contents of the final exam and thus may emphasize different areas. The number of questions in a category is not representative of that category's weight on the final exam.

2 Context-free Languages

1. Give an example of a language which is context-free but not regular.

2. What does it mean for a context-free grammar to be ambiguous? Give an example of an ambiguous grammar.

3. Provide both a leftmost and a rightmost derivation of aaabbc using the grammar specified by the following production rules:

   S → aABc
   A → aA
   A → ε
   B → bB
   B → ε

4. Design a context-free grammar for the language over Σ = {a, b, c} consisting of all words of the form a^n b^(n+m) c^m with n, m > 0.

5. Recall that the WLM rule for if statements looks like this (with some words abbreviated):

   stmt → IF LPAREN test RPAREN LBRACE stmts RBRACE ELSE LBRACE stmts RBRACE

   Add grammar rules to support the following extended if statement syntax:

   if (condition1) {
     // statements
   } else if (condition2) {
     // more statements
   }
   // [more else ifs]
   else { // else is still mandatory
     // statements
   }

   That is, you should support any number (zero or more) of else ifs in a WLM if statement.
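For intuition, one possible shape for the extra rules in question 5 is sketched below. This is only one of several valid answers, and the nonterminal name elseifs is made up here for illustration; the idea is to collect the optional else if parts into their own nonterminal and use it in a modified version of the original if rule:

   stmt → IF LPAREN test RPAREN LBRACE stmts RBRACE elseifs ELSE LBRACE stmts RBRACE
   elseifs → ε
   elseifs → elseifs ELSE IF LPAREN test RPAREN LBRACE stmts RBRACE

This generates an if part, then zero or more else if parts, then the mandatory final else block.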

3 LL Parsing

3.1 Identifying incorrect grammars

Argue that each of the following CFGs is not LL(1) by constructing enough of the predictor table to show an issue:

1. S' → S
   S → Sab
   S → xy

2. S' → S
   S → abc
   S → abd
   S → bSc

3.2 Parsing correct grammars

Construct a predictor table for the following CFG and use it to parse the string deccb.

   S' → S     (0)
   S → aY     (1)
   S → Zb     (2)
   Z → XY     (3)
   X → de     (4)
   X → ε      (5)
   Y → ccY    (6)
   Y → ε      (7)

4 LR Parsing

4.1 Identifying incorrect grammars

Show that each of the following grammars is not LR(0) by constructing enough of the automaton to show an issue:

1. S → aSb    (0)
   S → a      (1)

2. S → Xab    (0)
   S → Ycd    (1)
   X → e      (2)
   Y → e      (3)

4.2 Parsing correct grammars

Construct an SLR(1) machine for the following CFG and use it to parse the string baabbb.

   S → X      (0)
   X → XbAb   (1)
   X → ε      (2)
   A → Aa     (3)
   A → ε      (4)

5 Errors in WLM

Each of the following WLM programs contains one or more errors. For each error, you are given a hint about roughly where it occurs via an underline. For each program, identify:

- Which error would be detected first by the compiler (you may assume that if two errors occur in the same phase, the one which appears first in the code is detected first).
- Which phase of compilation that error occurs in.
- A brief description of why it is an error.

1. int wain(int a, int b, int c) {
     a = a + 1;
     int d = 3;
     return b;
   }

2. int wain(int a) {
     return a * a;
   }

3. int wain(int a, int b) {
     if (a < b) {
       return a;
     }
     return a > 3 ? a : b;
   }

4. int wain(int a, int b) {
     int a = 0;
     return a + b;
   }

5. int f(int a) {
     while (a < b) a = a + 1;
     return a;
   }

   int wain(int a, int b) {
     return f(a, b);
   }

6 Compiling code

Alice is frustrated with the lack of switch statements in WLM and decides to add them to the language, using the same syntax as C. Which parts of the compiler (scanning, parsing, context-sensitive analysis, and code generation) need to be changed? For each part that needs to be changed, give a high-level overview of what needs to be added, removed, or otherwise changed in that stage.

As a reminder of C's switch statement syntax, Alice might want to write the following code:

   switch (3 + 4) {
     case 5:
       foo();
       break;
     case 7:
       bar();
       break;
     default:
       break;
   }

7 Code Generation

7.1 Translating WLM programs to MIPS

Give MIPS assembly code equivalent to each of the following WLM code snippets. For full marks, your code should work even when the statements, expressions, and mathematical operations within the main statement or expression are replaced by arbitrary other subexpressions, statements, or operations. You may use the following WLM conventions in addition to the usual MIPS conventions:

- $29 is the frame pointer.
- $1 and $2 contain the arguments to a procedure.
- Fproc is the label associated with procedure proc.
- ST[a] is the offset of a from $29.
- $3 contains the result of any expression or procedure call.
- $4 contains 4, and $11 contains 1.

You may also reuse the code for previous parts by writing, for example, code(part1) to refer to the code generated for part 1. You may do this even if you did not complete the previous part.

1. 4

2. 4 + 4

3. ((4 + 4))

4. a

5. a = ((4 + 4));

6. while (a < 4) { a = ((4 + 4)); }

7. f(g(), 4)

7.2 Generating code for new rules

Bob is maintaining a WLM compiler and his users are frustrated that they have to type i = i + 1, c = c + d, and other cumbersome expressions: they miss the += syntax from C. As a result, he modifies the scanner so that it generates a PLUSEQ token for +=, and adds the following rule to the grammar:

   statement → ID PLUSEQ expr SEMI

However, he has forgotten MIPS and is stuck on code generation. Write Scala, C++, or Racket code to generate the MIPS assembly code corresponding to this rule and print it to standard output. You may use symboltable(id) to refer to the offset of the variable id from $29, and code(expr) to refer to the code corresponding to expr.
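To make the expected shape of an answer concrete, here is a rough C++ sketch for this rule. It is only one possibility, not a model solution: it assumes symboltable and code behave exactly as described above, that code(expr) leaves the value of expr in $3 (per the conventions in Section 7.1), and that $5 is free to use as a scratch register; the function name generatePlusEq and the string-based parameter types are made up for illustration.

   #include <iostream>
   #include <string>

   // Assumed to be provided elsewhere in the compiler, as described above:
   // the offset of a variable from $29, and the MIPS code that leaves the
   // value of expr in $3.
   int symboltable(const std::string &id);
   std::string code(const std::string &expr);

   // Generate MIPS for: statement -> ID PLUSEQ expr SEMI
   void generatePlusEq(const std::string &id, const std::string &expr) {
       int offset = symboltable(id);

       // Evaluate the right-hand side; by convention its value ends up in $3.
       std::cout << code(expr);

       // Load the variable, add the right-hand side to it, and store the
       // result back into the variable's slot in the frame.
       std::cout << "lw $5, " << offset << "($29)" << std::endl;
       std::cout << "add $3, $5, $3" << std::endl;
       std::cout << "sw $3, " << offset << "($29)" << std::endl;
   }

Any register not reserved by the conventions could play the role of $5 here; saving and restoring it on the stack would be the more careful choice if it might already be in use.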

8 Memory Management

8.1 Fixed-size memory management

Finish the WLM program below, which implements a fixed-size memory allocator for triple. You may assume some of the functions (specified below) are implemented already, and that WLM has been extended to support three-parameter functions. You need to implement triple and detriple. (For intuition, a rough C++ analogue of this free-list scheme is sketched at the end of this review.)

   // Implement the triple() and detriple() functions below
   // using the a11p1 library as well as the helpers defined below.

   int init() {
     // You may assume this function is defined correctly
     // to initialize the arena if it isn't already initialized,
     // and do nothing otherwise.
   }

   int freelistfirst() {
     // You may assume this function is defined correctly,
     // and it returns the address of the first element
     // in the freelist. The freelist will always be nonempty unless
     // no memory is available, in which case this function returns 0.
     // All elements in the freelist point to non-overlapping blocks
     // of size 12. The word pointed to by an element of the freelist
     // is the next element of the freelist.
   }

   int setfreelistfirst(int pointer) {
     // You may assume this function is defined correctly,
     // and it sets the top of the freelist to the pointer parameter.
     //
     // NOTE: This will replace the current freelist in its entirety!
   }

   int triple(int a, int b, int c) {
     // If 12 bytes of memory are available, this function
     // should store a, b, and c in that memory and return
     // its address. Otherwise it should return 0.
   }

   int detriple(int p) {
     // This function should make the memory associated
     // with p available for reuse.
   }

As an additional exercise, try implementing the other functions.

8.2 Variable-sized memory management

In both cases, the unspecified implementation of malloc should be O(1) for sparse arenas, and free should always be O(1).

1. Suppose a user requests 4 bytes of memory. Give a memory diagram for a block of memory, including the size of each component, that might be returned by malloc. Indicate with an arrow where the pointer returned by malloc points within this block.

2. Suppose a user frees the block of memory allocated in the previous part. Assuming the block is not merged with any adjacent blocks, give a memory diagram for the free block of memory, including the size of each component.

3. Explain how your block design in the previous parts allows free blocks to be merged with adjacent blocks when necessary in O(1) total time.

8.3 Different fit strategies

Suppose we use a much simpler block layout than in the previous question (which may not support efficient frees), where each block has at least eight bytes in it: free blocks simply have their size in bytes followed by a pointer to the next free block, while allocated blocks simply have their size in bytes followed by the user's memory. Show that best fit is neither always better nor always worse than first fit by giving (and explaining via a sequence of arena diagrams), assuming that the head of the freelist is always the most recently deallocated node:

1. A sequence of allocations and deallocations which can be satisfied by best fit but not by first fit.

2. A sequence of allocations and deallocations which can be satisfied by first fit but not by best fit.

Use a 256-byte arena in both cases.
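For intuition only, here is a rough C++ analogue of the fixed-size free-list scheme from Section 8.1. It is not WLM code and not a solution to the exercise as stated: the arena size, the block count, the pointer-typed interface, and all of the names are made up for illustration, and real WLM code would manipulate addresses as plain ints through the given helpers rather than through typed pointers.

   #include <cstring>
   #include <iostream>

   // A tiny fixed-size allocator over a static arena. Every block is 12
   // bytes, matching triple()/detriple(); the first bytes of a free block
   // store the address of the next free block (null marks the end).
   const int BLOCK_SIZE = 12;
   const int NUM_BLOCKS = 8;

   alignas(int) char arena[BLOCK_SIZE * NUM_BLOCKS];
   char *freelist = 0;          // head of the free list
   bool initialized = false;

   void init() {
       if (initialized) return;
       initialized = true;
       // Thread every block of the arena onto the free list.
       freelist = 0;
       for (int i = NUM_BLOCKS - 1; i >= 0; --i) {
           char *block = arena + i * BLOCK_SIZE;
           std::memcpy(block, &freelist, sizeof freelist);  // next pointer
           freelist = block;
       }
   }

   // Analogue of triple(a, b, c): pop a block off the free list and store
   // the three values in it; return null if no memory is available.
   int *triple(int a, int b, int c) {
       init();
       if (freelist == 0) return 0;
       char *block = freelist;
       std::memcpy(&freelist, block, sizeof freelist);  // advance the list
       int *p = reinterpret_cast<int *>(block);
       p[0] = a; p[1] = b; p[2] = c;
       return p;
   }

   // Analogue of detriple(p): push the block back onto the free list.
   void detriple(int *p) {
       char *block = reinterpret_cast<char *>(p);
       std::memcpy(block, &freelist, sizeof freelist);
       freelist = block;
   }

   int main() {
       int *t = triple(1, 2, 3);
       std::cout << t[0] << " " << t[1] << " " << t[2] << std::endl;  // 1 2 3
       detriple(t);
       return 0;
   }

The point it illustrates is that, because every block has the same size, both allocation and deallocation reduce to a single push or pop on the free list, so each is O(1).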