CSE Qualifying Exam, Spring 2008
February 2, 2008


Architecture

1. You are building a system around a processor with in-order execution that runs at 1.1 GHz and has a CPI of 0.7 excluding memory accesses. The only instructions that read or write data from memory are loads (20% of all instructions) and stores (5% of all instructions). The memory system for this computer is composed of a split L1 cache that imposes no penalty on hits. Both the I-cache and D-cache are direct mapped and hold 32 KB each. The I-cache has a 2% miss rate and 32-byte blocks, and the D-cache is write-through with a 5% miss rate and 16-byte blocks. There is a write buffer on the D-cache that eliminates stalls for 95% of all writes. The 512 KB write-back, unified L2 cache has 64-byte blocks and an access time of 15 ns. It is connected to the L1 cache by a 128-bit data bus that runs at 266 MHz and can transfer one 128-bit word per bus cycle. Of all memory references sent to the L2 cache in this system, 80% are satisfied without going to main memory. Also, 50% of all blocks replaced are dirty. The 128-bit-wide main memory has an access latency of 60 ns, after which any number of bus words may be transferred at the rate of one per cycle on the 128-bit-wide 133 MHz main memory bus.

   (a) What is the average memory access time for instruction accesses?
   (b) What is the average memory access time for data reads?
   (c) What is the average memory access time for data writes?
   (d) What is the overall CPI, including memory accesses?

2. A reduced hardware implementation of the classic five-stage RISC pipeline might use the EX stage hardware to perform a branch instruction comparison and then not actually deliver the branch target PC to the IF stage until the clock cycle in which the branch instruction reaches the MEM stage. Control hazard stalls can be reduced by resolving branch instructions in ID, but improving performance in one respect may reduce performance in other circumstances. How does determining branch outcome in the ID stage have the potential to negatively affect performance?

3. Three enhancements with the following speedups are proposed for a new architecture:

   Speedup1 = 30
   Speedup2 = 20
   Speedup3 = 15

   Only one enhancement is usable at a time.

   (a) If enhancements 1 and 2 are each usable for 25% of the time, what fraction of the time must enhancement 3 be used to achieve an overall speedup of 10?
   (b) Assume the enhancements can be used 25%, 35%, and 10% of the time for enhancements 1, 2, and 3, respectively. For what fraction of the reduced execution time is no enhancement in use?
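Problem 1 turns on the standard average-memory-access-time decomposition, and problem 3 on Amdahl's law extended to several mutually exclusive enhancements. The Python sketch below illustrates those formulas only and is not a worked solution; the helper names are hypothetical, and it assumes that "usable for X% of the time" refers to fractions of the original, unenhanced execution time.

    # Hedged sketch of the formulas behind problems 1 and 3; not a worked solution.

    def amat(hit_time, miss_rate, miss_penalty):
        # Average memory access time = hit time + miss rate * miss penalty.
        return hit_time + miss_rate * miss_penalty

    def amdahl_speedup(fractions, speedups):
        # Overall speedup when enhancement i applies to fraction f_i of the
        # original execution time and speeds that portion up by factor s_i;
        # the remaining (1 - sum f_i) of the time runs unenhanced.
        unenhanced = 1.0 - sum(fractions)
        new_time = unenhanced + sum(f / s for f, s in zip(fractions, speedups))
        return 1.0 / new_time

    # Problem 3(a) amounts to solving
    #   amdahl_speedup([0.25, 0.25, f3], [30, 20, 15]) = 10
    # for f3; the expression is monotone in f3, so bisection suffices.
    lo, hi = 0.0, 0.5
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if amdahl_speedup([0.25, 0.25, mid], [30, 20, 15]) < 10:
            lo = mid
        else:
            hi = mid
    print("fraction for enhancement 3 is roughly", round(hi, 3))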

Theory

1. Fix a finite alphabet Σ. For any language L ⊆ Σ*, define

   rotate(L) = { wa : a ∈ Σ, w ∈ Σ*, and aw ∈ L }.

   Show that if L is regular, then rotate(L) is regular. [Hint: First show that if L is regular, then for any fixed a ∈ Σ, the language

   L_a := { w : w ∈ Σ* and aw ∈ L }

   is regular.]

2. We use the standard model of a Turing machine with a one-way infinite tape whose cells are labeled 0, 1, 2, ... starting with the leftmost cell. For t ≥ 0, we say that TMs M1 and M2 are separate at time t iff, after exactly t steps since starting, the two machines are scanning cells with different labels, i.e., after exactly t steps M1 is scanning some cell i, M2 is scanning some cell j, and i ≠ j. Say that M1 and M2 eventually separate iff there is a time t at which M1 and M2 are separate. Let

   L = { ⟨M1, M2⟩ : M1 and M2 are TMs that eventually separate on input ε }.

   (a) (40%) Explain why L is Turing-recognizable.
   (b) (60%) Show that L is undecidable.

3. Show that the language

   L = { ⟨M, 1^n⟩ : M is a DFA, n ≥ 0, and M accepts at least one string of length n }

   is in P.
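The definitions in problems 1 and 3 can be made concrete with a small simulation. The Python sketch below illustrates the definitions only, not the requested proofs; the encoding of a DFA as a transition dictionary is an assumption of this sketch, not something fixed by the exam.

    # Hedged illustration of the definitions in Theory problems 1 and 3.

    def in_rotate(x, in_L):
        # x is in rotate(L) iff x = w·a for some symbol a with a·w in L,
        # i.e. moving x's last symbol to the front lands in L.
        return len(x) > 0 and in_L(x[-1] + x[:-1])

    def dfa_accepts_some_string_of_length(delta, start, accepting, sigma, n):
        # Problem 3: does the DFA accept at least one string of length n?
        # Track the set of states reachable by strings of length exactly k.
        # Each of the n iterations costs O(|Q|·|Σ|), so the whole test is
        # polynomial in the input size (n is given in unary as 1^n).
        reachable = {start}
        for _ in range(n):
            reachable = {delta[(q, a)] for q in reachable for a in sigma}
        return any(q in accepting for q in reachable)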

Algorithms

1. Using the tree method, give tight asymptotic bounds for the function T(n) that satisfies the recurrence T(n) = 2T(n/2) + n lg n. Show your work.

2. In a certain kind of augmented binary search tree, each node v has four attributes:

   key[v]: the actual value at the node v (for simplicity, assume that this is just a number),
   left[v]: the pointer to v's left subtree,
   right[v]: the pointer to v's right subtree, and
   l-size[v]: the number of nodes in v's left subtree.

   Keys are required to be distinct. Do all of the following:

   (a) (20% credit) Write pseudocode for a function size(r) that returns the number of nodes in BST r (passed by reference). Your algorithm should run in time O(d), where d is the depth of the tree.
   (b) (30% credit) Write pseudocode for a function select(r, i) that returns the ith smallest key in BST r. Your function should assume as a precondition that 1 ≤ i ≤ s, where s is the number of nodes in the tree.
   (c) (50% credit) Write pseudocode for a function insert(r, x) that attempts to insert a key value x into BST r at the standard place. The insertion fails iff x is already in the tree. Your function should return a Boolean value, with true indicating that the insertion was successful. Your function should also update l-size attributes of nodes as necessary.

   All three functions should run in time O(d), where d is the depth of the tree. The tree may be empty, in which case r is a null reference. You may use recursion or not as you wish.

3. Given an undirected graph G = (V, E), a vertex cover for G is any set C ⊆ V of vertices such that every edge in E has at least one endpoint in C. A vertex half-cover for G is any set C ⊆ V of vertices such that at least half (|E|/2) of the edges in E have at least one endpoint in C. Recall that the following problem is NP-complete:

   VERTEX COVER (VC)
   Instance: An undirected graph G = (V, E) and a nonnegative integer k.
   Question: Is there a vertex cover for G of size at most k?

   Consider the following decision problem:

   VERTEX HALF-COVER (VHC)
   Instance: An undirected graph G = (V, E) and a nonnegative integer k.
   Question: Is there a vertex half-cover for G of size at most k?

   Show that VHC is NP-hard by giving an explicit polynomial reduction from VC to VHC. You are not required to prove that your reduction actually works, but if it doesn't, some explanation may be worth partial credit. (VHC is clearly in NP, so you are actually showing that VHC is NP-complete.)
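Problems 2(b) and 2(c) are the standard order-statistic-tree operations driven by the l-size attribute. The Python sketch below shows one possible shape of select and insert; the Node class and its field names are assumptions of this illustration (the exam asks for pseudocode), and the empty-tree case is left to the caller.

    # Hedged sketch of Algorithms problems 2(b) and 2(c); Node is hypothetical.

    class Node:
        def __init__(self, key):
            self.key = key
            self.left = None
            self.right = None
            self.l_size = 0                    # nodes in the left subtree

    def select(r, i):
        # Return the i-th smallest key in the BST rooted at r (1 <= i <= size).
        rank = r.l_size + 1                    # rank of r.key within this subtree
        if i == rank:
            return r.key
        if i < rank:
            return select(r.left, i)
        return select(r.right, i - rank)       # skip the left subtree and root

    def insert(r, x):
        # Insert x below the non-null root r; return True on success,
        # False if x is already present.  Runs in O(depth of the tree).
        if x == r.key:
            return False
        if x < r.key:
            if r.left is None:
                r.left = Node(x)
                r.l_size += 1
                return True
            ok = insert(r.left, x)
            if ok:
                r.l_size += 1                  # left subtree gained a node
            return ok
        if r.right is None:
            r.right = Node(x)
            return True
        return insert(r.right, x)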

Compilers

1. A simple generalized list is defined recursively to be a pair of matching parentheses surrounding a comma-separated sequence of zero or more simple generalized lists. For example,

   ()
   (())
   ((), (()), ())
   (((), ()), ())

   are all simple generalized lists. Write a grammar for simple generalized lists. Your grammar should be unambiguous and suitable for LR (bottom-up) parsing. [Hint: two nonterminals and three productions suffice.]

2. Consider the following standard LR (bottom-up) grammar for arithmetic expressions with constants, addition, and multiplication (S is the start symbol):

   S ::= E
   E ::= E1 + T
   E ::= T
   T ::= T1 * F
   T ::= F
   F ::= c
   F ::= (E)

   (Subscripts have been added to distinguish different occurrences of the same nonterminal in the same production.) Do either, but not both, of the two items below:

   (a) (For 80% credit) Add semantic rules to the grammar above that, given an input expression, produce an equivalent expression with the minimum number of parentheses. So the rules in effect remove unnecessary parentheses. Your resulting expression should be passed as a string attribute to S.output. Assume that the terminal c has a text attribute that contains the string representing the constant. You may use + in your actions to denote string concatenation, and please surround string constants with double quotes. You should assume that + and * are associative operators, and that the usual precedence rules apply (* before +). Do not rearrange or alter the expression in any way other than the parentheses.

   (b) (For 100% credit) Do the same as the last item, but with the grammar also having a production for subtraction:

   E ::= E1 - T

   Subtraction has the same precedence as addition, and both operators associate from left to right. Subtraction is not associative. Examples:

   Input          Output         Comment
   2 + (3 + 4)    2 + 3 + 4      addition is associative
   (2 * 3) * 4    2 * 3 * 4      multiplication is associative
   2 + (3 * 4)    2 + 3 * 4      multiplication takes precedence over addition
   (2 + 3) * 4    (2 + 3) * 4    parentheses needed
   (2 * 3) + 4    2 * 3 + 4      operators associate left to right anyway
   2 * (3 + 4)    2 * (3 + 4)    parentheses needed

3. The following fragment of 3-address code was produced by a nonoptimizing compiler:

    1  start: i = 1
    2  loop1: if i > n goto part2
    3         j = 1
    4         sum = 0
    5  loop2: if j > i goto fin2
    6         o = i * 8
    7         o = o + j
    8         s = a[o]
    9         t = j * 8
   10         v = a[t]
   11         y = s * v
   12         if s < n goto fin1
   13         sum = sum + y
   14         j = j + 1
   15         goto loop1
   16  fin1:  j = sum
   17         goto fin2
   18  fin2:  i = i + 1
   19         o = i * 8
   20         a[o] = sum
   21         goto loop2
   22  part2: no-op

   Assume that there are no entry points into the code from outside other than at start.

   (a) (20% credit) Decompose the code into basic blocks B1, B2, ..., giving a range of line numbers for each.
   (b) (30% credit) Draw the control flow graph, describe any unreachable code, and coalesce any nodes if possible.
   (c) (30% credit) Is the variable o live just before line 9? Is the variable y live just before line 14? Explain. Assume that n and sum are the only live variables immediately after line 22.
   (d) (20% credit) Describe any simplifying transformations that can be performed on the code (i.e., transformations that preserve the semantics but reduce (i) the complexity of an instruction, (ii) the number of instructions, (iii) the number of branches, or (iv) the number of variables).
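Problem 3(a) uses the standard leader-based decomposition: the first instruction, every branch target, and every instruction immediately following a branch begins a new basic block. The Python sketch below shows that technique in general form; the instruction encoding (dicts with 'label', 'branch', and 'goto' fields) is an assumption of this illustration and is not tied to the listing above.

    # Hedged sketch of leader-based basic-block decomposition (Compilers 3(a)).
    # Each instruction is a dict that may carry 'label' (its label, if any),
    # 'branch' (True if it can transfer control), and 'goto' (its target label).

    def basic_blocks(instrs):
        # Return the blocks as (start, end) index ranges, 0-based and inclusive.
        n = len(instrs)
        label_at = {ins['label']: i for i, ins in enumerate(instrs)
                    if 'label' in ins}
        leaders = {0}                                     # first instruction
        for i, ins in enumerate(instrs):
            if ins.get('branch'):
                leaders.add(label_at[ins['goto']])        # branch target
                if i + 1 < n:
                    leaders.add(i + 1)                    # instruction after a branch
        starts = sorted(leaders)
        return [(s, starts[k + 1] - 1 if k + 1 < len(starts) else n - 1)
                for k, s in enumerate(starts)]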