More Simulations. CS154 Chris Pollett Apr 25, 2007.


Outline

- Multi-tape Turing Machines
- RAMs

k-tape Turing Machine

One way you might try to improve the power of a TM is to allow multiple tapes. Definition. A k-tape TM, where k >= 1 is an integer, is a six-tuple M = (K, Σ, δ, s, _, F), where K, Σ, s, and F are as in the 1-tape case. Now, however, the transition function is a map δ: K × Σ^k --> K × Σ^k × {L, R}^k. Basically, the heads on the tapes can move independently of each other. For example, with a two-tape machine an algorithm for palindrome testing is easy. We set up the transition function so that it first copies the input on the first tape to the second tape. Then it rewinds the first tape and leaves the second tape's head at the end of the input. Then the first head moves right while the second head moves left, and we compare the two tapes symbol by symbol. If the symbols ever don't match, we halt and reject. If the second head gets back to the blank symbol _, we accept.
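The two-tape palindrome routine above can be sketched in Python, with each tape held as a list and an independent head index per tape (a simulation sketch rather than a formal TM; the function name and the explicit three-phase structure are illustrative):

```python
def two_tape_palindrome(w):
    """Simulate the two-tape palindrome algorithm: copy, rewind, compare."""
    BLANK = "_"
    tape1 = list(w) + [BLANK]

    # Phase 1: copy the input to tape 2, both heads moving right.
    tape2 = list(w) + [BLANK]

    # Phase 2: rewind head 1; head 2 stays on the last input symbol.
    h1, h2 = 0, len(w) - 1

    # Phase 3: head 1 moves right while head 2 moves left, comparing.
    while tape1[h1] != BLANK:
        if h2 < 0 or tape1[h1] != tape2[h2]:
            return False          # mismatch: halt and reject
        h1 += 1
        h2 -= 1
    return True                   # head 2 passed the left end: accept
```

Each phase makes one pass over the input, which is where the 3(n+2) time bound quoted on the next slide comes from.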

TIME and SPACE Classes

We shall use the k-tape model of TM as our basic model to study time and space complexity. Let f: N --> N. We say that a machine M operates within time f(n) if, for any input string x, the time required by M on input x is at most f(|x|). Here |x| is the length of x as a string. We can make a similar definition for space. Defn. We say that a language L is in TIME(f(n)) (resp. SPACE(f(n))) if it is decided by some k-tape TM in time f(n) (resp. space f(n)). For example, the two-tape algorithm for palindromes runs in time 3(n+2), so palindromes are in TIME(3(n+2)). You can show that a single-tape machine for palindromes needs at least time Ω(n^2). How well can a 1-tape machine simulate a k-tape machine?

Simulating k-tape by 1-tape

Thm. Given any k-tape machine M that operates within time f(n), we can construct a 1-tape machine M' operating within time O((f(n))^2).

Proof. Let M = (K, Σ, δ, s, _, F) be a k-tape machine. The idea is that M''s alphabet Σ' is Σ expanded to include a symbol # to delimit the used portion of each tape, and, for each symbol b in Σ, an underlined copy of b to mark the position of a head. A configuration of M can now be written as the string (q, #w_1 a_1 v_1 # w_2 a_2 v_2 # ... # w_k a_k v_k #), where each a_i is the underlined symbol currently scanned by head i. So, except for the state q, which we can keep track of in the finite control, the configuration is a string over Σ'. We will use new states in K' to keep track of the state of M during a simulation step. To simulate M, we first convert the input into the initial configuration of M viewed as a string. Then, to simulate one step, we scan the current configuration string left to right, noting in our finite control what symbol is being read on each tape. Next we rewind the tape and make further passes to update each tape's portion of the configuration. In the worst case, a simulated step lengthens each tape's used portion by one square. Since each tape of M uses at most f(|x|)+1 squares, the configuration string has length at most k(f(|x|)+1)+1, so each simulated step costs a constant number of passes of length O(k·f(|x|)). Simulating f(|x|) steps therefore takes time at most f(|x|)·O(k·f(|x|)), which is O((f(n))^2).
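As a concrete illustration of the configuration string, here is a hypothetical Python helper that flattens k tape/head pairs into one #-delimited string, using a caret after a symbol in place of the proof's underlining (the function name and the caret marker are assumptions of this sketch):

```python
def encode_configuration(tapes, heads):
    """Flatten k tapes into the single string used by the 1-tape simulator.

    tapes: list of k lists of symbols; heads: list of k head positions.
    The symbol under each head gets a trailing '^', standing in for the
    underlined symbols of the proof.
    """
    parts = []
    for tape, h in zip(tapes, heads):
        marked = "".join(s + ("^" if i == h else "")
                         for i, s in enumerate(tape))
        parts.append(marked)
    return "#" + "#".join(parts) + "#"
```

For instance, two tapes holding "abc" and "de" with heads on positions 1 and 0 encode as "#ab^c#d^e#".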

Random Access Machines (RAMs)

A RAM consists of a program which acts on an array of registers. Each register is capable of storing an arbitrarily large integer (either positive or negative). Register 0 is called the accumulator; it is where any computation is done. A RAM program consists of a finite sequence of instructions P = (p_1, p_2, ..., p_n). The machine has a program counter which says what instruction is to be executed next; it initially points at the first instruction. An input w = w_1...w_n is initially placed symbol-wise into registers 1 through n. (We assume some encoding of the alphabet into the integers.) All other registers are 0. A step of the machine consists of looking at the instruction pointed to by the program counter, executing that instruction, and then adding 1 to the program counter if the instruction did not modify the program counter and was not the halt instruction. Upon halting, the output of the computation is the contents of the accumulator. For languages, we say w is in the language of the RAM if the accumulator is positive on halting, and is not in the language otherwise.

Allowable Instructions

Each instruction is from the following list:
1. Read j /* copy register j into the accumulator */
2. Read (j) /* look up the value v of register j, then copy register v into the accumulator */
3. Store j /* store the accumulator's value into register j */
4. Store (j) /* look up the value v of register j, then store the accumulator's value into register v */
5. Load x /* set the accumulator's value to the constant x */
6. Add j /* add the value of register j to the accumulator's value */
7. Sub j /* subtract the value of register j from the accumulator's value */
8. Half /* divide the accumulator's value by 2, rounding down (i.e., a right shift) */
9. Jump j /* set the program counter to the jth instruction */
10. JPos j /* if the accumulator is positive, set the program counter to j */
11. JZero j /* if the accumulator is zero, set the program counter to j */
12. JNeg j /* if the accumulator is negative, set the program counter to j */
13. HALT /* stop execution */
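A minimal Python interpreter for this instruction set makes the semantics concrete (a sketch: the tuple encoding of instructions, the opcode spellings, and the use of "READ*"/"STORE*" for the indirect forms Read (j)/Store (j) are all assumptions of this example):

```python
from collections import defaultdict

def run_ram(program, registers, max_steps=100_000):
    """Run a RAM program given as a 1-indexed list of (opcode, operand) tuples.

    Register 0 is the accumulator; unset registers default to 0.
    Returns the accumulator's value on HALT.
    """
    reg = defaultdict(int, registers)
    pc = 1
    for _ in range(max_steps):
        op, *arg = program[pc - 1]
        j = arg[0] if arg else None
        jumped = False
        if op == "READ":      reg[0] = reg[j]
        elif op == "READ*":   reg[0] = reg[reg[j]]   # indirect: Read (j)
        elif op == "STORE":   reg[j] = reg[0]
        elif op == "STORE*":  reg[reg[j]] = reg[0]   # indirect: Store (j)
        elif op == "LOAD":    reg[0] = j
        elif op == "ADD":     reg[0] += reg[j]
        elif op == "SUB":     reg[0] -= reg[j]
        elif op == "HALF":    reg[0] //= 2           # divide by 2, round down
        elif op == "JUMP":    pc, jumped = j, True
        elif op == "JPOS":
            if reg[0] > 0: pc, jumped = j, True
        elif op == "JZERO":
            if reg[0] == 0: pc, jumped = j, True
        elif op == "JNEG":
            if reg[0] < 0: pc, jumped = j, True
        elif op == "HALT":
            return reg[0]
        if not jumped:
            pc += 1                                  # fall through to next instruction
    raise RuntimeError("step limit exceeded")
```

For instance, run_ram([("READ", 1), ("ADD", 1), ("HALT",)], {1: 21}) doubles register 1's value and returns 42.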

Example Program for Multiplication

Suppose we place in registers 1 and 2 two numbers i_1 and i_2 that we would like to multiply:

1. Read 1 //(register 1 contains i_1; during the kth iteration
2. Store 5 // register 5 contains i_1·2^k; at the start, k=0)
3. Read 2
4. Store 2 //(register 2 contains floor(i_2/2^k) just before we increment k)
5. Half //(k is incremented, and the kth iteration begins)
6. Store 3 //(register 3 contains half of register 2)
7. Add 3 //(so now the accumulator is twice register 3)
8. Sub 2 //(the accumulator is 0 iff the low-order bit of register 2 is 0)
9. JZero 13
10. Read 4 //(the effect of instructions 10-12 is to add register 5 to register 4,
11. Add 5 // but only if the corresponding bit of i_2 is 1)
12. Store 4 //(register 4 contains i_1·(i_2 mod 2^k))
13. Read 5
14. Add 5
15. Store 5 //(double register 5; see the comment at instructions 1-2)
16. Read 3
17. JZero 19 //(if register 3 is 0, we are done)
18. Jump 4 //(otherwise, repeat)
19. Read 4 //(the result)
20. HALT
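The register-level logic of this program is ordinary doubling-and-halving (Russian peasant) multiplication. A Python sketch of the same loop, with variables named after the registers they mirror (the helper name is illustrative):

```python
def ram_multiply(i1, i2):
    """Doubling-and-halving multiplication, mirroring the RAM program above."""
    r5, r2, r4 = i1, i2, 0    # registers 5, 2, 4 of the program
    while r2 > 0:
        if r2 % 2 == 1:       # low-order bit is 1 (instructions 6-9)
            r4 += r5          # add register 5 into register 4 (instructions 10-12)
        r5 += r5              # double register 5 (instructions 13-15)
        r2 //= 2              # halve register 2 (instructions 5 and 4)
    return r4                 # register 4 holds i1 * i2
```

Since register 2 loses one bit per iteration, the loop runs len(i_2) times, which is the point of measuring RAM running time in the binary length of the input.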

Runtime of a RAM

Let D be a set of finite sequences of integers. A program P computes a function f from D to the integers if, for all I in D, the program P halts with f(I) in its accumulator. Let len(i) be the binary length of the integer i. The length of an input I = (i_1, ..., i_k) is defined as len(I) = Σ_j len(i_j). We say P runs in time g(n) if, for every I such that len(I) = n, P halts within g(n) steps.

Simulation Set-up

Theorem. If L is in TIME(f(n)), then there is a RAM which computes it in time O(f(n)).

Proof: Let M be the TM deciding L. We assume one blank symbol is appended to the input and that the tape alphabet has been encoded as integers. The RAM first moves the input from registers 1 through n to registers 4 through n+3. This is actually a little tricky to do. First, the RAM reads registers 1 and 2. Since the tape alphabet of M is finite, the RAM can remember these two values without writing them to other registers, by branching to different subroutines depending on what they were. With these values remembered, register 1 can be used as a counter to count up. By doing Read (1) instructions and incrementing register 1 until the encoding of the blank _ marking the end of the input is found, we scan to the end of the input. We look one register before this, read it, and store it into register 2. Adding 3 to register 1, we can then load register 2's value into the accumulator and do a Store (1). This moves the nth symbol to register n+3. By subtracting 4 from register 1 and doing a Read (1) we can get the next symbol to move, and so on. We are now almost ready to begin the simulation.

More Proof

Register 1 is used to hold the number of the tape square currently being read by the TM in the simulation; it is initially set to 4, the first square of the input. Register 3 holds a special start-of-tape symbol. The program now simulates M step by step. For each state q of M it has a sequence of instructions, with a subsequence N(q, j) handling the transition out of state q when the jth alphabet symbol is being read.

N(q, j)

Suppose δ(q, j) = (p, k, D), where D is a direction. To simulate this transition we use:

N(q,j) Load j
N(q,j)+1 Store 2
N(q,j)+2 Read (1)
N(q,j)+3 Sub 2 //(this is 0 iff the tape square being read holds j)
N(q,j)+4 JZero N(q,j)+6
N(q,j)+5 Jump N(q,j+1) //(if we are not reading a j, check whether we are reading a j+1)
N(q,j)+6 Load k //(write the symbol k)
N(q,j)+7 Store (1)
N(q,j)+8 Load d //(where d is -1, 0, or +1, depending on the direction D)
N(q,j)+9 Add 1 //(add the head position held in register 1)
N(q,j)+10 Store 1 //(update the head position)
N(q,j)+11 Jump N(p,1) //(enter the instruction block for the new state p)
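To make the bookkeeping concrete, here is a hypothetical Python generator for the N(q, j) block. It emits symbolic (label, instruction) pairs, with tuples like ('N', q, j, offset) standing in for absolute instruction numbers to be resolved once the whole program is laid out; all names and encodings here are assumptions of the sketch:

```python
def gen_transition_block(q, j, p, k, direction):
    """Emit the RAM instructions N(q,j) .. N(q,j)+11 for delta(q,j) = (p,k,D).

    Labels are symbolic tuples ('N', state, symbol, offset); an assembler
    pass would later replace them with absolute instruction numbers.
    'S' (stay) is included alongside 'L'/'R' for generality.
    """
    d = {"L": -1, "S": 0, "R": +1}[direction]
    L = lambda off: ("N", q, j, off)          # label for offset within this block
    return [
        (L(0),  ("LOAD",  j)),                # acc <- j
        (L(1),  ("STORE", 2)),                # reg2 <- j
        (L(2),  ("READ*", 1)),                # acc <- symbol under the head
        (L(3),  ("SUB",   2)),                # 0 iff that symbol is j
        (L(4),  ("JZERO", L(6))),             # match: go handle the transition
        (L(5),  ("JUMP",  ("N", q, j + 1, 0))),  # no match: try symbol j+1
        (L(6),  ("LOAD",  k)),                # write symbol k
        (L(7),  ("STORE*", 1)),
        (L(8),  ("LOAD",  d)),                # head move: -1, 0, or +1
        (L(9),  ("ADD",   1)),
        (L(10), ("STORE", 1)),                # update head position in register 1
        (L(11), ("JUMP",  ("N", p, 1, 0))),   # enter state p's block
    ]
```

Concatenating these blocks over all (q, j) pairs, plus the input-moving prologue, yields the whole simulating RAM; each simulated TM step executes O(|Σ|) instructions, giving the O(f(n)) bound.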