Halting Stack Automata


J. D. ULLMAN*
Bell Telephone Laboratories, Inc., Murray Hill, New Jersey

ABSTRACT. It is shown that every two-way (deterministic) stack automaton language is accepted by a two-way (deterministic) stack automaton which for each input has a bound on the length of a valid computation. As a consequence, two-way deterministic stack languages are closed under complementation.

KEY WORDS AND PHRASES: stack automaton, halting problem, recursiveness, complementation, transition matrix, closure properties, nondeterministic stack automaton

CR CATEGORIES:

1. Introduction

The stack automaton is an abstract machine (represented in Figure 1) consisting of a two-way input tape with endmarkers, a finite-state control, and a pushdown list which can be entered in a read-only mode. The pushdown list, or stack, is a semi-infinite tape which always holds a finite string of nonblank symbols to the left of an infinite sequence of blanks. The cell of the leftmost blank is called the top of the stack.

A move of the automaton depends upon the state of the finite control, the input symbol scanned, and the stack symbol scanned. In one move the stack automaton can change state and move its input head one cell left or right or leave it stationary. In addition, if the stack head is at the top of the stack (the leftmost blank), it has a choice of (1) leaving its stack head fixed; (2) printing a nonblank symbol at the top of the stack and then moving its stack head right one cell to the new top of the stack; (3) moving left; or (4) moving left and then printing a blank (i.e. erasing) over the rightmost nonblank. If the stack head is not at the top of the stack, it can only move one cell left or right or not move at all.

If the stack automaton has at most one choice of move in any situation, it is deterministic. Otherwise, it is nondeterministic. A stack automaton is nonerasing if it never prints a blank over a nonblank symbol (choice (4) above).
It is one-way if its input head never moves left. The stack automaton was first defined in [1]. The nonerasing variety was considered in [1] and [2], the one-way variety in [3-5].

In [1] it was shown that stack automata are recursive. We prove something stronger here. Not only can an external observer tell whether a given stack automaton is performing a useless computation (either it will never accept or there is a shorter computation leading to acceptance), as shown in the proof of recursiveness [1], but the stack automaton can itself "tell" this. More formally, we show, for type X = two-way and either deterministic or nondeterministic, the statement:

(*) If a language L is accepted by a type X stack automaton, then L is accepted by a type X stack automaton which has no infinite computations with fixed input.

We comment that for the four cases in which type X is deterministic (nonerasing) one-way or nondeterministic (nonerasing) one-way, (*) follows directly from constructions given in [3]. Also, if type X is two-way deterministic nonerasing or two-way nondeterministic nonerasing, (*) follows from the constructions of [2], which show equivalence of these types to certain tape complexity classes of Turing machines.

It is often hard to prove a result analogous to (*) for a given class of automata. For example, it is not known whether the analogous result is true for two-way pushdown automata [6] or two-way Turing machines of tape complexity L(n) if L(n) is functionally less than log n [7].

There are several uses for the results of this paper. For one, it immediately follows that the class of languages accepted by deterministic stack automata is closed under complementation. Many other closure properties of stack automata can be shown using these results as a lemma [8].

* Present address: Department of Electrical Engineering, Princeton University, Princeton, N.J.

Journal of the Association for Computing Machinery, Vol. 16, No. 4, October 1969.

Fig. 1. Stack automaton

2. Definitions

Formally, a two-way nondeterministic stack automaton (2NSA) is a 10-tuple S = (K, Σ, Γ, δ, δb, q0, Z0, ¢, $, F), where:

(1) K, Σ, and Γ are finite sets of states, input symbols, and nonblank stack symbols, respectively. Γ does not include the special symbols -1, 0, and E.
(2) Z0, in Γ, is the bottom-of-stack marker.
(3) ¢ and $, in Σ, are left and right endmarkers, respectively.
(4) q0, in K, is the start state.
(5) F, a subset of K, is the set of final states.
(6) δ maps K × Σ × Γ into the subsets of K × {-1, 0, +1} × {-1, 0, +1}.
(7) δb maps K × Σ × Γ into the subsets of K × {-1, 0, +1} × ((Γ - {Z0}) ∪ {-1, 0, E}).
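As a concrete (and purely illustrative) reading of mappings (6) and (7), the two tables can be written as dictionaries from (state, input symbol, stack symbol) triples to sets of moves; every state and symbol below is invented, not taken from the paper:

```python
# Illustrative encoding of the two transition mappings of a 2NSA.
# delta gives moves below the top of the stack: the last two components are
# the input-head and stack-head moves, each in {-1, 0, +1}.
# delta_b gives moves at the top of the stack: the third component is
# -1 (move left), 0 (stay), "E" (erase the rightmost symbol), or a symbol
# of the stack alphabet (print it and move the head right).
delta = {
    ("q0", "a", "A"): {("q0", +1, -1), ("q1", 0, +1)},  # two nondeterministic choices
}
delta_b = {
    ("q0", "¢", "Z0"): {("q0", +1, 0)},    # stay at the top, move input right
    ("q0", "a", "Z0"): {("q0", +1, "A")},  # print A; the head follows to the new top
    ("q0", "$", "A"):  {("q1", -1, "E")},  # erase the rightmost nonblank
}
```

The automaton is deterministic exactly when every set on the right-hand side has at most one element.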

If for no q in K, a in Σ, and Z in Γ does δ(q, a, Z) or δb(q, a, Z) contain more than one element, then S is a two-way deterministic stack automaton (2DSA).

The mapping δ specifies the moves of S when the stack head is below the top of the stack. If δ(q, a, Z) contains (p, d1, d2), then if S is in state q, scanning a on its input and Z on its stack, it may enter state p and move its input and stack heads d1 and d2 symbols right, respectively. (Thus -1 indicates a move left.)

The mapping δb specifies the moves of S when the stack head is at the top of the stack. In this situation, the rightmost stack symbol may affect the move. If δb(q, a, Z) contains (p, d, X), then when S is in state q, scanning a on its input, with the stack head of S at the top of the stack and Z the rightmost stack symbol, S may enter state p, move its input head d symbols right, and (1) if X = -1 or 0, move its stack head X symbols right; (2) if X = E, move the stack head left and erase the rightmost stack symbol; (3) if X is in Γ, print X and move the stack head right.

The symbol Z0 marks the bottom of the stack and does not appear elsewhere. The symbols ¢ and $ mark the ends of the input and likewise do not appear elsewhere.

A configuration of a 2NSA S = (K, Σ, Γ, δ, δb, q0, Z0, ¢, $, F) is a 5-tuple (q, w, y, i, j), where q is in K, w is in ¢(Σ - {¢, $})*$, y is in Γ*, and i and j are integers, 1 ≤ i ≤ |w| and 0 ≤ j ≤ |y|. (We use |x| for the length of string x. Also, A* is the set of strings of symbols in alphabet A, including e, the string of length 0.) The string w is the input tape content, y is the stack, and i is the input head position. If w = a_1 a_2 ... a_n, each a_k in Σ, then the input head is scanning a_i. The integer j indicates the stack head position. If j = 0, that head is at the top of the stack. If j > 0 and y = Z_1 Z_2 ... Z_m, then the stack head is scanning Z_{m-j+1}.
The relation ⊢_S, or ⊢ where S is understood, relates two configurations if the second can be obtained from the first by a single move of S. Formally, consider a configuration (q, a_1 ... a_n, Z_1 ... Z_m, i, j), with each a_k in Σ and Z_k in Γ.

If j > 0, δ(q, a_i, Z_{m-j+1}) contains (p, d1, d2), and the inequalities 1 ≤ i + d1 ≤ n and 0 ≤ j - d2 ≤ m hold, then

    (q, a_1 ... a_n, Z_1 ... Z_m, i, j) ⊢ (p, a_1 ... a_n, Z_1 ... Z_m, i + d1, j - d2).

If j = 0 and δb(q, a_i, Z_m) contains
(1) (p, d, 0), then (q, a_1 ... a_n, Z_1 ... Z_m, i, 0) ⊢ (p, a_1 ... a_n, Z_1 ... Z_m, i + d, 0);
(2) (p, d, -1), then (q, a_1 ... a_n, Z_1 ... Z_m, i, 0) ⊢ (p, a_1 ... a_n, Z_1 ... Z_m, i + d, 1);
(3) (p, d, E), then (q, a_1 ... a_n, Z_1 ... Z_m, i, 0) ⊢ (p, a_1 ... a_n, Z_1 ... Z_{m-1}, i + d, 0);
(4) (p, d, X), where X is in Γ - {Z0}, then (q, a_1 ... a_n, Z_1 ... Z_m, i, 0) ⊢ (p, a_1 ... a_n, Z_1 ... Z_m X, i + d, 0).

Define the relation ⊢*_S, or ⊢* where S is understood, by Q ⊢* Q for any configuration Q, and Q_1 ⊢* Q_2 if there is a configuration Q_3 such that Q_1 ⊢ Q_3 and Q_3 ⊢* Q_2.

For a given 2NSA S, as above, we say a sequence of configurations Q_1, Q_2, ..., Q_n is a w-computation if Q_1 = (q0, w, Z0, 1, 0) and, for 1 < k ≤ n, Q_k is of the form (q_k, w, y_k, i_k, j_k) and Q_{k-1} ⊢ Q_k. The length of this w-computation is n.
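The single-move relation just defined can be sketched as an ordinary function. The configuration layout follows the definitions above; `delta` and `delta_b` are hypothetical tables mapping (state, input symbol, stack symbol) to sets of moves, and are not taken from the paper:

```python
# A sketch of the single-move relation |- defined above.
# A configuration is (q, w, y, i, j): state, input tape (with endmarkers),
# stack string, input head position (1-indexed), and the distance j of the
# stack head from the top (j = 0 means the head is at the top of the stack).

def successors(config, delta, delta_b):
    q, w, y, i, j = config
    n, m = len(w), len(y)
    result = []
    if j > 0:
        # Below the top: scanning Z_{m-j+1}, i.e. y[m - j] with 0-indexing.
        for p, d1, d2 in delta.get((q, w[i - 1], y[m - j]), ()):
            if 1 <= i + d1 <= n and 0 <= j - d2 <= m:
                result.append((p, w, y, i + d1, j - d2))
    elif y:
        # At the top: the rightmost stack symbol y[-1] may affect the move.
        for p, d, x in delta_b.get((q, w[i - 1], y[-1]), ()):
            if not 1 <= i + d <= n:
                continue
            if x == 0:                        # (1) stack head stays at the top
                result.append((p, w, y, i + d, 0))
            elif x == -1:                     # (1) stack head moves one cell left
                result.append((p, w, y, i + d, 1))
            elif x == "E":                    # (2) erase the rightmost symbol
                result.append((p, w, y[:-1], i + d, 0))
            else:                             # (3) print x; head follows to the top
                result.append((p, w, y + x, i + d, 0))
    return result
```

A nondeterministic machine may yield several successor configurations; a deterministic one yields at most one.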

We say a sequence of configurations Q_1, Q_2, ..., Q_n is a stack scan if for some w and all k, Q_k = (q_k, w, y_k, i_k, j_k), Q_{k-1} ⊢ Q_k for 1 < k ≤ n, j_1 = 1, j_n = 0, and for no k, 1 < k < n, is j_k = 0. Informally, a stack scan is a sequence of configurations whereby S raises its stack head from the rightmost stack symbol to the top of the stack without reaching the top of the stack in an intermediate configuration. S may make a long excursion into the stack in so doing. If Q_1, Q_2, ..., Q_n is a stack scan, we write Q_1 ⊢^s_S Q_n, or Q_1 ⊢^s Q_n when S is understood.

For a given S = (K, Σ, Γ, δ, δb, q0, Z0, ¢, $, F), define the language accepted by final state, T(S), to be the set of w in (Σ - {¢, $})* such that for some q in F and some y, i, and j, (q0, ¢w$, Z0, 1, 0) ⊢* (q, ¢w$, y, i, j). The language accepted by empty stack, N(S), is the set of w in (Σ - {¢, $})* such that for some q and i, (q0, ¢w$, Z0, 1, 0) ⊢* (q, ¢w$, e, i, 0). It is elementary to show that a language L is T(S_1) for some 2NSA (2DSA) S_1 if and only if L is N(S_2) for some 2NSA (2DSA) S_2 (see [9], for example). For convenience, we use acceptance by empty stack exclusively.

We say S is halting if for every input w there is a constant k_w such that there are no w-computations of length greater than k_w.

3. Transition Matrices

The transition matrix is a concept which is useful in describing the operation of stack automata. It was introduced in [2], but we give a definition here. Let S = (K, Σ, Γ, δ, δb, q0, Z0, ¢, $, F) be a 2NSA. Let w be in ¢(Σ - {¢, $})*$, |w| = n, and let M = [m_ij] be an n × n matrix whose elements, denoted m_ij, 1 ≤ i, j ≤ n, are subsets of K × K. We say the transition matrix M describes y in Γ* if for all i and j, m_ij contains (p, q) if and only if (p, w, y, i, 1) ⊢^s (q, w, y, j, 0). Thus M tells exactly which changes of state and input head position S can make while performing a stack scan.
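Although the paper never needs to build M this way, the defining property can be tabulated by brute force for a toy machine. Here `successors` is an assumed callback enumerating single moves for one fixed input and stack, and all names are illustrative:

```python
# Brute-force tabulation of a transition matrix for one fixed w and y
# (an illustration of the definition, not the paper's construction).
# successors(q, a, j) is assumed to yield the triples
# (new state, new input position, new stack distance), with the stack fixed.
from collections import deque

def transition_matrix(states, n, successors):
    M = {(i, j): set() for i in range(1, n + 1) for j in range(1, n + 1)}
    for p in states:
        for i in range(1, n + 1):
            seen = {(p, i, 1)}           # a stack scan starts with j = 1
            queue = deque([(p, i, 1)])
            while queue:
                q, a, j = queue.popleft()
                for q2, a2, j2 in successors(q, a, j):
                    if j2 == 0:          # head reached the top: the scan is done
                        M[(i, a2)].add((p, q2))
                    elif (q2, a2, j2) not in seen:
                        seen.add((q2, a2, j2))
                        queue.append((q2, a2, j2))
    return M
```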
The transition matrix is a compact method of describing the action of S on stacks of arbitrary length.

Let S be a 2DSA, and w as above. Let t be a function from K × {1, 2, ..., n} to (K × {1, 2, ..., n}) ∪ {φ}. We say the transition function t describes y in Γ* if for all p in K and 1 ≤ i ≤ n, t(p, i) = (q, j) if (p, w, y, i, 1) ⊢^s (q, w, y, j, 0), and t(p, i) = φ if there exist no q and j such that (p, w, y, i, 1) ⊢^s (q, w, y, j, 0). Note that since S is a 2DSA, the transition function describing y is unique. That is, (p, w, y, i, 1) ⊢^s (q_1, w, y, j_1, 0) and (p, w, y, i, 1) ⊢^s (q_2, w, y, j_2, 0) imply q_1 = q_2 and j_1 = j_2.

4. A Necessary and Sufficient Condition for Halting

As a preliminary, let us state and prove a necessary and sufficient condition for a stack automaton to be halting. In what follows it will be shown that every stack automaton can be modified to satisfy the condition.

LEMMA 1. A stack automaton S = (K, Σ, Γ, δ, δb, q0, Z0, ¢, $, F) is halting if and only if for each input w, there are constants k_1, k_2, and k_3 such that if V = Q_1,

Q_2, ..., Q_n is a w-computation, where for 1 ≤ m ≤ n, Q_m = (q_m, w, y_m, i_m, j_m), then
(1) for all m, |y_m| ≤ k_1;
(2) for all m, 1 ≤ m ≤ n - k_2 + 1, at least one of j_m, j_{m+1}, ..., j_{m+k_2-1} is zero;
(3) if there are integers m_1 < m_2 < ... < m_{k_3} such that y_{m_1} = y_{m_2} = ... = y_{m_{k_3}} = y and j_{m_1} = j_{m_2} = ... = j_{m_{k_3}} = 0, then for some m, m_1 < m < m_{k_3}, we have |y_m| < |y|.

PROOF. Intuitively, the three conditions say that S must not grow its stack indefinitely, get lost in the stack, or repeat a stack too often without erasing a symbol of that stack.

It is trivial to see that if there is a bound k on the length of a w-computation, then letting k_1 = k_2 = k_3 = k + 1 causes conditions (1), (2), and (3) to be satisfied.

To prove the converse, we assume that for some input w, conditions (1), (2), and (3) are satisfied. We then show that S has no w-computations of length k_2(2k_3)^{k_1+1} or greater. Assume the contrary: V = Q_1, Q_2, ..., Q_n is a w-computation with n = k_2(2k_3)^{k_1+1}. Form a new list of configurations W by deleting from V all configurations in which the stack head is not at the top of the stack. (Note that S cannot necessarily go from one configuration of W to the next in one move.) By condition (2), there are at least (2k_3)^{k_1+1} configurations in sequence W. By condition (1), no stack of a configuration in sequence W is of length greater than k_1.

Let U_i, 0 ≤ i ≤ k_1, be the list of configurations in W the length of whose stack is exactly i. Let j_i be the length of U_i. Then

    Σ_{i=0}^{k_1} j_i ≥ (2k_3)^{k_1+1}.

Now j_0 ≤ 1, since at most one configuration in a w-computation can have an empty stack. For i ≥ 1, let P_1, P_2, ..., P_{j_i} be the list U_i, and let y_m be the stack in configuration P_m. For arbitrary r, if y_r, y_{r+1}, ..., y_{r+k_3-1} are the same, then by condition (3), there is a configuration in the sequence V between P_r and P_{r+k_3-1} whose stack is shorter than y_r.
Thus one of the configurations between P_r and P_{r+k_3-1} must have a stack of length i - 1 with the stack head at the top of the stack. This configuration is in list U_{i-1}. If y_r, y_{r+1}, ..., y_{r+k_3-1} are not all the same, then certainly the rightmost symbol of y_r must have been erased, and we again conclude the existence of a configuration between P_r and P_{r+k_3-1} in sequence V which is also in U_{i-1}. By letting r = 1, k_3 + 1, 2k_3 + 1, ..., we conclude that j_i ≤ k_3 j_{i-1} + k_3. Since j_0 ≤ 1, an easy induction gives j_i ≤ (2k_3)^i, whence

    Σ_{i=0}^{k_1} j_i ≤ Σ_{i=0}^{k_1} (2k_3)^i < (2k_3)^{k_1+1}.

We have arrived at a contradiction, and the lemma is thus proved.

The three conditions of Lemma 1 will hereafter be referred to as "conditions (1), (2), and (3)."

5. Main Results

We now prove lemmas to the effect that conditions (1), (2), and (3) are satisfied for some 2NSA (2DSA) accepting each language that is accepted by an arbitrary

2NSA (2DSA). The constructions involved are each based on the ability of a stack automaton to store on its stack numbers whose size is proportional to the length of the input. The numbers are stored as blocks of 1's. (See [2] for details of how these numbers may be stored and manipulated by a stack automaton.)

LEMMA 2. Let S be a 2NSA; N(S) = L. Then there is a 2NSA S_1, with N(S_1) = L, satisfying condition (1).

PROOF. Let S have s states and t tape symbols, and let w be input to S; |w| = n. Suppose that there is a w-computation V = Q_1, Q_2, ..., Q_k, and e (the empty string) is the stack in configuration Q_k. Suppose also that some configuration of V has stack y = Z_1 Z_2 ... Z_m, where m > s^2 t n^2 2^{s^2 n^2}, and y is as long as any stack string found in V. To each i, 1 ≤ i ≤ m, we can associate a 6-tuple (Z_i, q_i, p_i, d_i, e_i, M_i) such that
(1) before, or at the time, y first became the stack string, the last configuration of computation V in which the length of the stack was i was (q_i, w, Z_1 Z_2 ... Z_i, d_i, 0);
(2) after y first became the stack string, the first configuration of V in which the stack had length i - 1 was (p_i, w, Z_1 Z_2 ... Z_{i-1}, e_i, 0);
(3) M_i is the transition matrix describing Z_1 Z_2 ... Z_i.

Now, the number of possible values that each component of the above 6-tuple can assume is, respectively, t, s, s, n, n, and 2^{s^2 n^2}. Thus, there are at most s^2 t n^2 2^{s^2 n^2} distinct 6-tuples, and there must exist i_1 and i_2, with i_2 > i_1, whose associated 6-tuples are the same, say (Z, q, p, d, e, M). Thus

    (q0, w, Z0, 1, 0) ⊢* (q, w, y_1 Z, d, 0) ⊢* (q, w, y_2 Z, d, 0) ⊢* (p, w, y_2, e, 0) ⊢* (p, w, y_1, e, 0),

where y_1 = Z_1 Z_2 ... Z_{i_1 - 1} and y_2 = Z_1 Z_2 ... Z_{i_2 - 1}. The same transition matrix describes both y_1 and y_2. Thus, for any string x, one transition matrix describes both y_1 x and y_2 x. (In [2] it is shown that for any symbol Y, the transition matrices describing y_1 Y and y_2 Y are the same. This argument obviously extends to the present case.)
Since in computation V no symbol of y_2 is erased between the configurations (q, w, y_2 Z, d, 0) and (p, w, y_2, e, 0) above, it follows that (q, w, y_1 Z, d, 0) ⊢* (p, w, y_1, e, 0) by a sequence of configurations which are the same as those in computation V between the configurations (q, w, y_2 Z, d, 0) and (p, w, y_2, e, 0), except that each stack string y_2 x is replaced by y_1 x.

We can thus construct a new w-computation V' leading to an empty stack, such that either (1) the longest stack of V' is shorter than the longest stack of V, or (2) the longest stacks of V' and V are of the same length, but fewer configurations of V' than of V have stacks of that length. If stacks of length greater than s^2 t n^2 2^{s^2 n^2} remain in sequence V', we can apply the above argument as many times as necessary to show the existence of a w-computation, leading to an empty stack, in which the stack length never exceeds s^2 t n^2 2^{s^2 n^2}.

There is an integer b such that b^{n^2} ≥ s^2 t n^2 2^{s^2 n^2} for all n ≥ 2. We construct a 2NSA S_1 which simulates S. Below each stack symbol of S, S_1 keeps a count in base b of how many symbols of S were on S's stack when that symbol was printed. If the count overflows, S_1 halts.
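The bound at the end of this argument is easy to sanity-check numerically; the particular choice of b below is only an illustration (any sufficiently large b depending on s and t works), not the paper's:

```python
# Numeric check of the bound used in Lemma 2: the number of distinct 6-tuples
# is at most s^2 * t * n^2 * 2^(s^2 n^2), and an integer b depending only on
# s and t satisfies b^(n^2) >= that count for all n >= 2.
def tuple_count(s, t, n):
    return s * s * t * n * n * 2 ** (s * s * n * n)

def base(s, t):
    # One choice of b that works for every n >= 2 (illustrative, not Ullman's).
    return 4 * s * s * t * 2 ** (s * s)

for s in range(1, 4):
    for t in range(1, 4):
        for n in range(2, 5):
            assert base(s, t) ** (n * n) >= tuple_count(s, t, n)
```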

Fig. 2. Stack of S_1
Fig. 3. Count format

The format of S_1's stack is shown in Figure 2. X_0 is S_1's end-of-stack marker. The counts are base-b numbers of length n^2, with marker symbols * after every n digits. The n digits between markers form a "block." The base-b number of length n^2 (with possible leading zeros) r_{n^2} r_{n^2 - 1} ... r_1 is stored low-order digits first, as shown in Figure 3. The input locators are blocks whose use we mention later.

S_1 simulates the moves of S within the stack directly, ignoring counts and input locators. That is, when S moves its stack head, S_1 moves in the same direction until it comes to a stack symbol of S. Likewise, when S_1's stack head is at the top of the stack, it can simulate moves of S which do not cause a symbol to be printed or erased. If S can erase, S_1 erases not only the stack symbol of S but also the count and input locator below it, so that a stack symbol of S is again the rightmost symbol of S_1's stack.

If S can print a symbol Z at the top of its stack, S_1 will eventually do the same. But first, S_1 moves its input head to the left endmarker, printing a symbol X on its stack every time it moves left. The symbol X is used for no other purpose, and the block of X's is the "input locator." That is, the length of the block equals the number of positions to the left of S's input head. S_1 must now place above the input locator a new count, which is one greater than the old count. S_1 will pick up one digit of the old count at a time, modify it if necessary to reflect the addition of 1 to the count, and place the resulting digit on top of the stack. Digits are picked up low-order first. To add 1, S_1 initially changes digits b - 1 to 0 until some digit d < b - 1 is found. This digit is changed to d + 1, and no subsequent changes are made. If all digits are b - 1, then 1 cannot be added, and the count has "overflowed."
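The digit-by-digit addition just described can be sketched as follows; the list holds the base-b digits low-order first, exactly in the order they are picked up from the stack:

```python
# Sketch of the increment S_1 performs on a count stored low-order digit
# first: digits equal to b - 1 become 0 until a digit below b - 1 is found,
# which is bumped by one; if every digit is b - 1, the count overflows.
def increment(digits, b):
    out = []
    carried = True
    for d in digits:              # low-order digits are picked up first
        if carried and d == b - 1:
            out.append(0)
        elif carried:
            out.append(d + 1)
            carried = False
        else:
            out.append(d)
    if carried:                   # every digit was b - 1: overflow
        return None               # S_1 would halt here
    return out
```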
We now describe how to copy an arbitrary digit of the count; it can be modified if necessary, according to the above algorithm. Suppose S_1 has copied i complete blocks of the count and j symbols of the (i + 1)-th block, 0 ≤ i, j < n.

1. S_1 finds the (i + 1)-th block of the old count by counting marker symbols * from the top of the stack. That is, with its stack head at the top and its input head at the left endmarker, S_1 repeatedly moves its stack head left, moving its input head one cell right each time S_1 encounters * on its stack. When S_1's input head reaches the right endmarker, its stack head has found n - 1 complete blocks and goes one block further down to find the (i + 1)-th block of the old count.

2. S_1 nondeterministically chooses a symbol in the (i + 1)-th block of the old count, in such a way that each symbol of the block is chosen in some computation. S_1 positions its input head as many cells from the left endmarker as the symbol chosen is from the left end of the block.

3. S_1 brings its stack head to the top, keeping its input head fixed, and compares the position of its input head with the length of the unfinished block to verify that the (j + 1)-th symbol has been chosen. If not, S_1 halts. Since S_1 nondeterministically chooses every symbol, exactly one choice allows S_1 to continue. S_1 prints the correct choice of symbol, changing it if the addition algorithm so dictates.

4. S_1 uses the length of its input to check whether the new (i + 1)-th block now contains n symbols. If so, * is printed, and S_1 checks whether n blocks have been copied. If not, S_1 copies another digit.

5. When all n^2 symbols have been copied or changed, S_1 restores the input head position of S, using the input locator.

It is possible that when S_1 attempted to add 1 to the count, the count overflowed, because the previous count was b^{n^2} - 1. If so, S_1 halts. S_1 initializes the count by printing n blocks of n 0's below the first stack symbol of S. S_1 will simulate all and only those computations of S in which S's stack length does not exceed b^{n^2}; so S_1 satisfies condition (1). We have already argued that if S accepts an input, then it does so by a computation in which the stack length satisfies that bound. Thus N(S_1) = N(S).

The argument of Lemma 2 does not work for the 2DSA, as the method of incrementing the count was nondeterministic in nature. However, the number of transition functions is not as large as the number of transition matrices, and a 2DSA can count high enough to "know" whether it will grow its stack forever.

LEMMA 3. Let S be a 2DSA; N(S) = L. Then there is a 2DSA S_1, with N(S_1) = L, satisfying condition (1).

PROOF. Let S have s states and t tape symbols, and let w be input to S, |w| = n. The number of transition functions is (sn + 1)^{sn}, the number of functions from the state-position pairs into the state-position pairs plus φ. By an argument similar to that used in Lemma 2, we can show that if the stack grows longer than s^2 t (sn + 1)^{sn}, then S is in a loop and will never accept. There is an integer b such that (bn)^{bn} > s^2 t (sn + 1)^{sn} for all n > 2. S_1 can store a count and input locator below each stack symbol of S.
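The count of transition functions used here, (sn + 1)^{sn}, can be verified by brute-force enumeration for tiny s and n; the enumeration below is only a check of the formula, of course, not part of the construction:

```python
# Check of the count used in Lemma 3: the number of maps from K x {1..n}
# into (K x {1..n}) u {phi} is (sn + 1)**(sn). Here None plays the role
# of phi, and states are just the integers 0..s-1.
from itertools import product

def count_transition_functions(s, n):
    domain = [(q, i) for q in range(s) for i in range(1, n + 1)]
    codomain = [(q, i) for q in range(s) for i in range(1, n + 1)] + [None]
    # Enumerate every assignment of a codomain value to each domain point.
    return sum(1 for _ in product(codomain, repeat=len(domain)))

assert count_transition_functions(1, 2) == (1 * 2 + 1) ** (1 * 2)
assert count_transition_functions(2, 2) == (2 * 2 + 1) ** (2 * 2)
```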
The count is represented by bn blocks of from 0 to bn - 1 1's each. The blocks are separated by markers *. The integer represented by blocks of lengths i_1, i_2, ..., i_{bn}, from lowest to highest on the stack, is i_1 + (bn) i_2 + (bn)^2 i_3 + ... + (bn)^{bn-1} i_{bn}.

S_1 simulates S as in Lemma 2. If S prints a symbol, S_1 must store an input locator and copy the count, adding 1. To do so, S_1 repeatedly moves bn + 1 blocks down the stack, using its input length b times to count n block markers. S_1 then stores the length of the block found, using its finite control and input head position. That is, if the length is an + l, 0 ≤ a < b, 0 ≤ l < n, then a is stored by the finite control and l by the position of the input head. This block can then be copied, incremented by 1, or set to 0, in accordance with the obvious algorithm. S_1 can also check whether bn blocks have been copied. If the count overflows, S_1 halts. N(S_1) = N(S), and S_1 satisfies condition (1).

The next step is to show that condition (2) can always be satisfied. To make sure that a 2NSA or 2DSA never "gets lost" in the stack, we construct a new stack automaton which has a transition matrix or function below each stack symbol. The new stack automaton does not simulate the original stack automaton when the latter leaves the top of the stack, but instead inspects the transition matrix or function to determine whether the original will get lost in the stack, or in what state-position pairs it can be when it next reaches the top of the stack. We treat the 2DSA first.

LEMMA 4. If L = N(S) for some 2DSA S, then there exists a 2DSA S_1, with N(S_1) = L, satisfying conditions (1) and (2).

Fig. 4. Segment of S_1's stack (input locator; transition function describing Z_1 Z_2 ... Z_i)

PROOF. Assume, by Lemma 3, that S satisfies condition (1). Let the states of S be q_1, q_2, ..., q_s, and let w be S's input, |w| = n. S_1 will simulate S. If S enters a configuration with stack Z_1 Z_2 ... Z_m, then below each Z_i, 1 ≤ i ≤ m, will be a segment of tape of the form shown in Figure 4. S_1 will display the transition function t describing Z_1 Z_2 ... Z_i below the stack symbol of S in the order

    t(q_1, n) ... t(q_s, n) ... t(q_1, 2) ... t(q_s, 2) t(q_1, 1) ... t(q_s, 1),

from lowest to highest on the stack. The sn entries are separated by the marker symbol *. If t(q_i, j) = φ, then the symbol φ appears on the stack in the place reserved for t(q_i, j). If t(q_i, j) = (p, k), then the symbol p, followed by k 1's, appears in the place reserved for t(q_i, j).

S_1 has its own bottom-of-stack marker. It initially places the transition function describing e, that is, sn φ's separated by *'s, on its stack below S's bottom-of-stack marker. S_1 then uses the algorithm delineated below to construct the transition function describing S's bottom-of-stack marker, and then prints that symbol above its transition function.

If S makes a move in which the stack head remains at the top of the stack, S_1 does likewise, changing the state of S which S_1 has recorded in its finite control. If S erases a stack symbol, S_1 erases that symbol and all data below it, up to but not including the next stack symbol of S.

Suppose S is in state q_l and moves left from the top of the stack. S_1 must reference the transition function t found immediately below the rightmost stack symbol of S to determine whether S returns to the top of the stack and, if so, what S's state and input position then are. If S_1 has its input head at position j, it must find the block on its stack reserved for t(q_l, j). It does so by repeatedly moving s markers (*) down its stack and one symbol left on its input until the left endmarker is reached.
By then moving s - l markers further, it arrives at the block for t(q_l, j). If this block holds the symbol φ, S_1 halts, since S will never return to the top of the stack and thus cannot accept. If the block holds a state of S and k 1's, S_1 records this state as the new state of S and moves its input head to position k. Then S_1 returns to the top of the stack and proceeds to simulate another move of S.

Lastly, when S prints a new symbol Z, S_1 must construct the transition function t' describing the new stack string of S. First, S_1 stores its input position on the stack as a block of the symbol X, a symbol used for no other purpose. This block is the input locator of Figure 4. S_1 will calculate and print the sn values of t' in the same order as they appear in t. We describe below how S_1 computes a particular entry t'(q_i, j).

S_1 keeps two variables, q* and j*. The variable q* is a state of S and is stored in S_1's control; j* is an integer between 1 and n and is stored as the input position of S_1. Initially, q* = q_i and j* = j. (Note that q_i and j are determined by the number of blocks of t' which have already been printed.) The following operation is done repeatedly:

1. S_1 determines what S would do in state q*, scanning Z on the stack, with its input at position j*. Four cases arise.

a. No move is possible. Then t'(q_i, j) = φ. S_1 may consider the next value of the arguments of t'. (In cases (b), (c), and (d), S has a move. The state of S after this move becomes q* and the new input position of S becomes j*.)

b. S keeps its stack head stationary. S_1 does step 2.

c. S moves its stack head right. In this case, t'(q_i, j) is (q*, j*), and we are finished.

d. S moves its stack head left. After obtaining the new q* and j*, S_1 can reference t to find t(q*, j*). If t(q*, j*) = φ, then t'(q_i, j) = φ. If t(q*, j*) = (p, k), then q* and j* are set to p and k, respectively. S_1 then does step 2.

2. S_1 prints a symbol Y on top of the stack. Then S_1 temporarily stores j* on the stack as a block of another symbol W and checks that it has not printed more than sn Y's since the computation of t'(q_i, j) began. If not, S_1 restores j* to the input, erasing the W's, and repeats step 1. If more than sn Y's have been printed, step 1 has been done more than sn times. Thus step 1 was done twice with the same q* and j*, say q* = q' and j* = j'. In this case, t'(q_i, j) = φ, since if S is in configuration (q_i, w, yZ, j, 1), where y is any stack string described by t, S will enter the configuration (q', w, yZ, j', 1) repeatedly without reaching the top of the stack.

In the manner described above, S_1 can compute t'(q_i, j) for any q_i and j. The Y's on the stack are then erased, and S_1 uses its input length to check whether all sn values of t' have been computed. If so, S_1 prints Z on top of its stack and proceeds to simulate another move of S.

A proof that N(S) = N(S_1), by induction on the number of moves made by S, is straightforward. Below each stack symbol of S, S_1 places at most sn(n + 3) other symbols; so if S satisfies condition (1), S_1 does likewise. In addition, whenever S_1 leaves the top of the stack, it either returns or halts; so S_1 satisfies condition (2).

We use the strategy of Lemma 4 to construct a 2NSA satisfying condition (2).
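The iteration on q* and j* in the proof of Lemma 4 can be sketched as ordinary code. Here `delta` is assumed to give S's unique move while scanning Z below the top (one triple per key, or absent), `PHI` plays the role of φ, and a seen-set replaces the count of sn printed Y's; both detect the same repetition of a (q*, j*) pair. All concrete names are invented:

```python
# Sketch of how S_1 derives the transition function t_new describing yZ from
# the function t_old describing y (Lemma 4). delta maps (state, input symbol,
# stack symbol) to S's single move (p, d1, d2), with d2 = +1/0/-1 the stack
# move; delta is assumed to keep the input head between the endmarkers.
PHI = None

def extend(t_old, delta, states, n, Z, w):
    t_new = {}
    for qi in states:
        for j in range(1, n + 1):
            qs, js = qi, j                    # the variables q* and j*
            seen = set()                      # detects repeated (q*, j*) pairs
            while True:
                if (qs, js) in seen or not 1 <= js <= n:
                    t_new[(qi, j)] = PHI      # S cycles below the top forever
                    break
                seen.add((qs, js))
                move = delta.get((qs, w[js - 1], Z))
                if move is None:              # case (a): no move is possible
                    t_new[(qi, j)] = PHI
                    break
                p, d1, d2 = move
                qs, js = p, js + d1
                if d2 == 1:                   # case (c): head moves right, done
                    t_new[(qi, j)] = (qs, js)
                    break
                if d2 == -1:                  # case (d): head moves left into y
                    r = t_old.get((qs, js), PHI)
                    if r is PHI:
                        t_new[(qi, j)] = PHI
                        break
                    qs, js = r                # S rescans y and returns to Z
                # case (b): d2 == 0 falls through and step 1 repeats
    return t_new
```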
However, the procedure whereby a transition matrix M' describing yZ is constructed from a matrix M describing y is quite complicated. The n × n matrix M will be displayed on the stack in the order

    m_nn ... m_n1 * ... * m_2n ... m_21 * m_1n ... m_11,

the *'s marking blocks of n elements with the same first subscript. We make use of the fact that a Turing machine can compute M' from M displayed in this manner without leaving the region in which M is stored on its tape. Then a 2NSA with input of length n can simulate the Turing machine, resulting in a usable representation of M'. The constructions used are from [2]; they are only sketched here.

LEMMA 5. Let S = (K, Σ, Γ, δ, δb, q0, Z0, ¢, $, F) be a 2NSA and Z be in Γ. There is a deterministic Turing machine T_Z which, when started with an arbitrary string of the form ¢w$ m_nn ... m_n1 ... m_2n ... m_21 m_1n ... m_11 on its tape, where |¢w$| = n and the m's are elements of a transition matrix M = [m_ij], will halt with ¢w$ m'_nn ... m'_n1 ... m'_2n ... m'_21 m'_1n ... m'_11 on its tape. M' = [m'_ij] is that transition matrix for S with input ¢w$ which describes yZ if M describes y. While performing this computation, T_Z does not leave the region of its tape on which data was originally placed.

PROOF. We briefly describe the algorithm whereby M' is constructed from M. The implementation of the algorithm on a Turing machine which does not leave the region holding data is described in [2].

Let M' = [m'_ij]. In order to compute m'_ij, T_Z must determine whether (q, ¢w$, yZ, i, 1) ⊢^s (p, ¢w$, yZ, j, 0) for each q and p in K. In what follows, "Q_1 ⊢^< Q_2" will mean Q_1 ⊢* Q_2 by a sequence of moves in which the stack head never reaches the top of

the stack. To begin, T_Z must determine the set P of pairs (q_1, i_1) such that (q, ¢w$, yZ, i, 1) ⊢^< (q_1, ¢w$, yZ, i_1, 1). P is computed recursively as follows:

1. Place (q, i) in P. Pairs in P have been either considered or not; the pair (q, i) has not been considered at this point.

2. If there is a pair (q_1, i_1) in P which has not been considered, do the following.

a. Find those pairs (q_2, i_2) such that (q_1, ¢w$, yZ, i_1, 1) ⊢ (q_2, ¢w$, yZ, i_2, 1). The value of δ(q_1, a, Z), where a is the i_1-th symbol of ¢w$, determines this set. Any such (q_2, i_2) which is not in P is adjoined to P.

b. Find those pairs (q_2, i_2) such that for some q_3 in K and integer i_3,

    (q_1, ¢w$, yZ, i_1, 1) ⊢ (q_3, ¢w$, yZ, i_3, 2) and (q_3, ¢w$, y, i_3, 1) ⊢^s (q_2, ¢w$, y, i_2, 0).

These conditions are determined by δ and m_{i_3 i_2}, respectively, and together imply that (q_1, ¢w$, yZ, i_1, 1) ⊢^< (q_2, ¢w$, yZ, i_2, 1). Any such (q_2, i_2) not in P is adjoined to P.

c. The pair (q_1, i_1) has now been considered. New pairs added to P have not been considered. Repeat step 2.

3. When all pairs in P have been considered, use δ to determine whether for some (q_1, i_1) in P, (q_1, ¢w$, yZ, i_1, 1) ⊢ (p, ¢w$, yZ, j, 0). This condition is equivalent to (q, p) being in m'_ij.

LEMMA 6. Let T be a halting Turing machine which never leaves the region in which data is initially placed on its tape. There is a halting 2NSA S which, when given an input of length n and a tape string A_1 A_2 ... A_{n^2+n} of T on its stack in the format

    p_0 A_1 A_2 ... A_n * A_{n+1} A_{n+2} ... A_{2n} * ... * A_{n^2+1} A_{n^2+2} ... A_{n^2+n}

(the *'s are marker symbols and p_0 is the start state of T), will print above this on its stack the tape string of T which results when T is started with this string and allowed to operate until it halts. S will then enter a designated state, and will not enter this state otherwise. Between the two strings there will be additional stack symbols of S.

PROOF.
S will simulate moves of T by printing above a given configuration of T (tape string with the state of T to the left of the symbol scanned by its head, and *'s after each n tape symbols of T) the configuration which results from one move of T. S picks up one tape symbol at a time and brings it to the top in the same way counts were copied in Lemma 2. However, S must here move n + 1 blocks down the stack instead of n, because configurations of T use n + 1 blocks while counts use only n. Each time S copies a tape symbol of T, it must check whether that symbol was changed by T, or whether T's head moves left and scans that symbol in the next configuration. If so, S prints the new state of T in the proper position, as well as the tape symbol of T. When T has no further move, S completes that configuration and enters its designated state. More detail on this construction can be found in [2].

LEMMA 7. If L = N(S) for some 2NSA S, then L = N(S1) for some 2NSA S1 satisfying conditions (1) and (2).

PROOF. Assume S satisfies condition (1), by Lemma 2. S1 will simulate S; below each stack symbol Z of S will appear the data shown in Figure 5. Since S is nondeterministic, S1 chooses an allowable move of S to simulate. As in Lemma 4, S1 directly simulates moves of S in which the stack head remains at the top of the stack.

[FIG. 5. Stack of S1: below a stack symbol Z of S appear the initial configuration and simulation of T_Z, and the matrix describing the stack up to Z.]

If S erases a symbol, S1 erases that symbol also, as well as everything below it up to the next stack symbol of S. If S moves left from the top of the stack, S1 records S's new state, say q, in its finite control and has S's input position, say j, as its own. The transition matrix M = [m_jk] describing S's current stack is found immediately below the rightmost stack symbol of S, in the order m_nn ... m_n1 * ... * m_2n ... m_21 * m_1n ... m_11. S1 finds the jth block of M by moving its stack head repeatedly left, and its input head one position left each time * is encountered on its stack, until the left endmarker is reached on the input. Then S1 nondeterministically chooses an element in the jth block, say m_ji, at the same time moving its input head to position i. S1 next nondeterministically chooses a state p such that (q, p) is in m_ji and records p as the new state of S. If no such p exists, S1 halts. Finally, S1 brings its stack head to the top. S1 thus returns to the top of the stack in exactly those pairs of state of S and input position in which M indicates S can return.

Finally, if S chooses to print a symbol Z, S1 does the following. We suppose that w is on S1's input tape, |w| = n.

1. S1 stores its input position as a block of a special symbol (the input locator).
2. S1 constructs the representation of the initial configuration of the Turing machine T_Z of Lemma 5. First, S1 prints the initial state of T_Z, copies its input onto the stack, and prints a marker, *. Then S1 copies the transition matrix using the technique of Lemma 2 whereby counts were copied.
3. Using the technique of Lemma 6, S1 simulates T_Z until T_Z halts.
4. Then S1 copies the final configuration of T_Z, except for the state of T_Z and the leftmost n tape symbols of T_Z (which are equal to S1's input).
The method of copying is again that of Lemma 2.

5. Finally, S1 prints the stack symbol printed by S, restores its input position, adjusts it to account for the move of S, and records the new state of S. S1 is now ready to simulate another move of S.

Observe that in the copying procedure of Lemma 2, whenever the stack head leaves the top of the stack, no matter what choices are made, the 2NSA must either halt or return to the top of the stack. The same is true of the procedure whereby the transition matrix is referenced to simulate stack scans of S. Finally, the simulation of T_Z in Lemma 6 is by a halting 2NSA. Thus S1 satisfies condition (2). Moreover, S satisfies condition (1), and the length of S1's stack is bounded by the length of S's stack times a factor which depends on the length of the input. Thus S1 satisfies condition (1). A proof that N(S1) = N(S), by induction on the number of moves made by S, is straightforward.

THEOREM 1. If L = N(S) for some 2NSA S, then L = N(S1), where S1 is a halting 2NSA.

PROOF. By Lemma 7, assume S satisfies conditions (1) and (2). Let S have s states. We construct S1 to simulate S and keep count, below each stack symbol, of the number of moves S has made at the top of the stack while that symbol was the rightmost stack symbol. The count is stored as a block of 1's.
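The recursive computation of the set P in the proof of Lemma 5 is, in modern terms, a worklist closure. The sketch below is only illustrative: `moves_on_Z` and `returns_through_y` are hypothetical stand-ins for the one-step moves given by δ while the stack head scans Z (step 2a) and for the excursions into y that the stored matrix M certifies as returning (step 2b).

```python
def closure(q, i, moves_on_Z, returns_through_y):
    """All pairs (q1, i1) reachable from (q, i) while the stack head
    stays below the top of the stack, as in steps 1-2 of Lemma 5."""
    P = {(q, i)}                 # step 1: (q, i) starts out unconsidered
    unconsidered = [(q, i)]
    while unconsidered:          # step 2: pick an unconsidered pair
        q1, i1 = unconsidered.pop()
        # step 2a: moves that keep the stack head on Z, together with
        # step 2b: excursions into y that M says return to Z
        for pair in moves_on_Z(q1, i1) | returns_through_y(q1, i1):
            if pair not in P:    # adjoin new pairs; they are unconsidered
                P.add(pair)
                unconsidered.append(pair)
    return P                     # step 3 then reads m'_ij off this set
```

Step 3 of the proof would then ask, for each (q1, i1) in the returned set, whether δ permits a move to the top of the stack in state p at input position j.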

Let w be S's input, |w| = n. If S makes more than sn moves at the top of the stack for which the stack content is the same, then for two of these moves S must have been in the same state, with its input head at the same position. Thus, if w is accepted, there is a shorter sequence of moves of S leading to acceptance; so S1 may halt without accepting should any count go above sn.

S1 has its own bottom-of-stack marker and initially prints S's bottom-of-stack marker. Strictly speaking, there is a count of 0 (no 1's) below that symbol. Moves of S within the stack are simulated directly; S1 ignores the counts, moving through them to find stack symbols of S. If S erases a symbol, S1 erases that symbol and the count below it. If S makes a move at the top of the stack other than an erase, S1 first erases the top symbol of the stack, adds 1 to the count below it, and restores the symbol erased. S1 then stores the position of its input head on the stack as a block of the symbol X, used only for that purpose. S1 then uses the length of its input to check that the count it has just incremented is no greater than sn. If it does exceed sn, S1 halts without accepting. Next, S1 erases the block of X's, restoring its input head position as it does so. Finally, S1 simulates the move of S. When S1 prints a symbol, the count below it is initially 0.

That N(S1) = N(S) follows from the observation that if S accepts by a sequence of moves in which more than sn moves are made at the top of the stack with the same stack content, then there is a shorter sequence of moves of S leading to acceptance. The stack of S1 never gets longer than (s + 1)n times the length of S's stack; so S1 satisfies condition (1). When testing counts, if S1 leaves the top of the stack, it either halts or returns; so S1 satisfies condition (2). Finally, S1 satisfies condition (3).
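The sn bound used above is the pigeonhole principle over pairs (state, input position). As a minimal illustration (not part of the construction itself), a simulator that records these pairs for the moves made at the top of the stack with a fixed stack content can stop as soon as one repeats:

```python
def loops_at_top(snapshots, s, n):
    """snapshots: (state, input position) pairs observed at the top of
    the stack while the stack content stays fixed; S has s states and
    the input (with endmarkers) has length n."""
    seen = set()
    for snap in snapshots:
        if snap in seen:
            return True          # a pair repeated: the computation loops
        seen.add(snap)
    # without a repeat there are at most s*n distinct pairs, so any run
    # of more than s*n such moves is forced to repeat one of them
    assert len(seen) <= s * n
    return False
```

So after sn + 1 such moves a repetition is forced, which is exactly why S1 may safely halt without accepting once a count exceeds sn.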
For if S1 is at the top of the stack and a symbol of S is the rightmost stack symbol, S1 makes at most one move at the top of the stack (the printing of a symbol X to begin storing an input locator) and then erases that symbol (as part of the simulation of a move of S). If a 1 is the rightmost stack symbol, S1 makes at most two moves (a move printing a stack symbol of S and one printing another 1), and then erases the 1 (when the stack symbol of S above it is erased). If X is the rightmost stack symbol, S1 makes one move at the top of the stack (the printing of another X or a move left), and then erases the X. Thus, by Lemma 1, S1 is halting.

THEOREM 2. If L = N(S) for a 2DSA S, then L = N(S1), where S1 is a halting 2DSA.

PROOF. As in Theorem 1, S1 keeps count of the number of moves made by S with a given symbol as the rightmost on the stack. If S makes more than sn moves without erasing that symbol, then S is in a loop and cannot erase its stack. (Here, s is the number of states of S; n is the length of the input.) The argument proceeds as in Theorem 1.

Consequences of the Main Result

One simple result which follows from Theorem 2 is

THEOREM 3. If L ⊆ Σ* is N(S) for some 2DSA S, then L̄ = Σ* − L is accepted by a 2DSA. (I.e., 2DSA languages are closed under complementation.)

PROOF. By Theorem 2, assume S is halting. Construct S1 to accept L̄ by final state. S1 simulates S. If S halts with a nonempty stack, S1 enters a final state. Otherwise, S1's stack will eventually become empty, and S1 will not accept.

Other closure properties of 2NSA and 2DSA requiring Theorems 1 and 2 are given in [8]. In particular, the halting property is required to show closure of the 2DSA languages under ε-free homomorphism.

REFERENCES
1. GINSBURG, S., GREIBACH, S. A., AND HARRISON, M. A. Stack automata and compiling. J. ACM 14, 1 (Jan. 1967).
2. HOPCROFT, J. E., AND ULLMAN, J. D. Nonerasing stack automata. J. Comput. Syst. Sci. 1, 2 (Aug. 1967).
3. GINSBURG, S., GREIBACH, S. A., AND HARRISON, M. A. One-way stack automata. J. ACM 14, 2 (Apr. 1967).
4. HOPCROFT, J. E., AND ULLMAN, J. D. Sets accepted by one-way stack automata are context sensitive. Inform. Contr. 13, 2 (Aug. 1968).
5. HOPCROFT, J. E., AND ULLMAN, J. D. Deterministic stack automata and the quotient operator. J. Comput. Syst. Sci. 2, 1 (June 1968).
6. GRAY, J. N., HARRISON, M. A., AND IBARRA, O. Two-way pushdown automata. Inform. Contr. 11 (1967).
7. HOPCROFT, J. E., AND ULLMAN, J. D. Some results on tape-bounded Turing machines. J. ACM 16, 1 (Jan. 1969).
8. GINSBURG, S., AND HOPCROFT, J. E. Two-way balloon automata and AFL. Rep. TM 738/042/00, System Development Corp., Santa Monica, Calif.
9. HOPCROFT, J. E., AND ULLMAN, J. D. Formal Languages and Their Relation to Automata. Addison-Wesley, Reading, Mass., 1969.

RECEIVED FEBRUARY, 1968; REVISED SEPTEMBER, 1968


More information

3186 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 9, SEPTEMBER Zero/Positive Capacities of Two-Dimensional Runlength-Constrained Arrays

3186 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 51, NO. 9, SEPTEMBER Zero/Positive Capacities of Two-Dimensional Runlength-Constrained Arrays 3186 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 51, NO 9, SEPTEMBER 2005 Zero/Positive Capacities of Two-Dimensional Runlength-Constrained Arrays Tuvi Etzion, Fellow, IEEE, and Kenneth G Paterson, Member,

More information

Solutions to Homework 10

Solutions to Homework 10 CS/Math 240: Intro to Discrete Math 5/3/20 Instructor: Dieter van Melkebeek Solutions to Homework 0 Problem There were five different languages in Problem 4 of Homework 9. The Language D 0 Recall that

More information

arxiv: v4 [math.co] 25 Apr 2010

arxiv: v4 [math.co] 25 Apr 2010 QUIVERS OF FINITE MUTATION TYPE AND SKEW-SYMMETRIC MATRICES arxiv:0905.3613v4 [math.co] 25 Apr 2010 AHMET I. SEVEN Abstract. Quivers of finite mutation type are certain directed graphs that first arised

More information

Finite Automata. Dr. Nadeem Akhtar. Assistant Professor Department of Computer Science & IT The Islamia University of Bahawalpur

Finite Automata. Dr. Nadeem Akhtar. Assistant Professor Department of Computer Science & IT The Islamia University of Bahawalpur Finite Automata Dr. Nadeem Akhtar Assistant Professor Department of Computer Science & IT The Islamia University of Bahawalpur PhD Laboratory IRISA-UBS University of South Brittany European University

More information

Name: CS 341 Practice Final Exam. 1 a 20 b 20 c 20 d 20 e 20 f 20 g Total 207

Name: CS 341 Practice Final Exam. 1 a 20 b 20 c 20 d 20 e 20 f 20 g Total 207 Name: 1 a 20 b 20 c 20 d 20 e 20 f 20 g 20 2 10 3 30 4 12 5 15 Total 207 CS 341 Practice Final Exam 1. Please write neatly. You will lose points if we cannot figure out what you are saying. 2. Whenever

More information

Treewidth and graph minors

Treewidth and graph minors Treewidth and graph minors Lectures 9 and 10, December 29, 2011, January 5, 2012 We shall touch upon the theory of Graph Minors by Robertson and Seymour. This theory gives a very general condition under

More information

Final Exam 2, CS154. April 25, 2010

Final Exam 2, CS154. April 25, 2010 inal Exam 2, CS154 April 25, 2010 Exam rules. he exam is open book and open notes you can use any printed or handwritten material. However, no electronic devices are allowed. Anything with an on-off switch

More information

A Formal Study of Practical Regular Expressions

A Formal Study of Practical Regular Expressions International Journal of Foundations of Computer Science c World Scientific Publishing Company A Formal Study of Practical Regular Expressions Cezar Câmpeanu Department of Mathematics and Computer Science,

More information

A matching of maximum cardinality is called a maximum matching. ANn s/2

A matching of maximum cardinality is called a maximum matching. ANn s/2 SIAM J. COMPUT. Vol. 2, No. 4, December 1973 Abstract. ANn s/2 ALGORITHM FOR MAXIMUM MATCHINGS IN BIPARTITE GRAPHS* JOHN E. HOPCROFT" AND RICHARD M. KARP The present paper shows how to construct a maximum

More information

Basics of Graph Theory

Basics of Graph Theory Basics of Graph Theory 1 Basic notions A simple graph G = (V, E) consists of V, a nonempty set of vertices, and E, a set of unordered pairs of distinct elements of V called edges. Simple graphs have their

More information

COMPUTABILITY THEORY AND RECURSIVELY ENUMERABLE SETS

COMPUTABILITY THEORY AND RECURSIVELY ENUMERABLE SETS COMPUTABILITY THEORY AND RECURSIVELY ENUMERABLE SETS JOSHUA LENERS Abstract. An algorithm is function from ω to ω defined by a finite set of instructions to transform a given input x to the desired output

More information

On the Complexity of Multi-Dimensional Interval Routing Schemes

On the Complexity of Multi-Dimensional Interval Routing Schemes On the Complexity of Multi-Dimensional Interval Routing Schemes Abstract Multi-dimensional interval routing schemes (MIRS) introduced in [4] are an extension of interval routing schemes (IRS). We give

More information

Lecture 4: 3SAT and Latin Squares. 1 Partial Latin Squares Completable in Polynomial Time

Lecture 4: 3SAT and Latin Squares. 1 Partial Latin Squares Completable in Polynomial Time NP and Latin Squares Instructor: Padraic Bartlett Lecture 4: 3SAT and Latin Squares Week 4 Mathcamp 2014 This talk s focus is on the computational complexity of completing partial Latin squares. Our first

More information

Optimal Variable Length Codes (Arbitrary Symbol Cost and Equal Code Word Probability)* BEN VARN

Optimal Variable Length Codes (Arbitrary Symbol Cost and Equal Code Word Probability)* BEN VARN INFORMATION AND CONTROL 19, 289-301 (1971) Optimal Variable Length Codes (Arbitrary Symbol Cost and Equal Code Word Probability)* BEN VARN School of Systems and Logistics, Air Force Institute of Technology,

More information

VERTICES OF LOCALIZED IMBALANCE IN A BIASED GRAPH

VERTICES OF LOCALIZED IMBALANCE IN A BIASED GRAPH PROCEEDINGS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 101, Number 1, September 1987 VERTICES OF LOCALIZED IMBALANCE IN A BIASED GRAPH THOMAS ZASLAVSKY ABSTRACT. A biased graph consists of a graph T and

More information

STABILITY AND PARADOX IN ALGORITHMIC LOGIC

STABILITY AND PARADOX IN ALGORITHMIC LOGIC STABILITY AND PARADOX IN ALGORITHMIC LOGIC WAYNE AITKEN, JEFFREY A. BARRETT Abstract. Algorithmic logic is the logic of basic statements concerning algorithms and the algorithmic rules of deduction between

More information

A SHARKOVSKY THEOREM FOR VERTEX MAPS ON TREES

A SHARKOVSKY THEOREM FOR VERTEX MAPS ON TREES A SHARKOVSKY THEOREM FOR VERTEX MAPS ON TREES CHRIS BERNHARDT Abstract. Let T be a tree with n vertices. Let f : T T be continuous and suppose that the n vertices form a periodic orbit under f. We show:

More information

(p 300) Theorem 7.27 SAT is in P iff P=NP

(p 300) Theorem 7.27 SAT is in P iff P=NP pp. 292-311. The Class NP-Complete (Sec. 7.4) P = {L L decidable in poly time} NP = {L L verifiable in poly time} Certainly all P is in NP Unknown if NP is bigger than P (p. 299) NP-Complete = subset of

More information

3 Fractional Ramsey Numbers

3 Fractional Ramsey Numbers 27 3 Fractional Ramsey Numbers Since the definition of Ramsey numbers makes use of the clique number of graphs, we may define fractional Ramsey numbers simply by substituting fractional clique number into

More information

Timo Latvala. January 28, 2004

Timo Latvala. January 28, 2004 Reactive Systems: Kripke Structures and Automata Timo Latvala January 28, 2004 Reactive Systems: Kripke Structures and Automata 3-1 Properties of systems invariants: the system never reaches a bad state

More information

CISC 4090 Theory of Computation

CISC 4090 Theory of Computation CISC 4090 Theory of Computation Turing machines Professor Daniel Leeds dleeds@fordham.edu JMH 332 Alan Turing (1912-1954) Father of Theoretical Computer Science Key figure in Artificial Intelligence Codebreaker

More information

COVERING SPACES AND SUBGROUPS OF THE FREE GROUP

COVERING SPACES AND SUBGROUPS OF THE FREE GROUP COVERING SPACES AND SUBGROUPS OF THE FREE GROUP SAMANTHA NIEVEEN AND ALLISON SMITH Adviser: Dennis Garity Oregon State University Abstract. In this paper we will use the known link between covering spaces

More information

Turing Machines, continued

Turing Machines, continued Previously: Turing Machines, continued CMPU 240 Language Theory and Computation Fall 2018 Introduce Turing machines Today: Assignment 5 back TM variants, relation to algorithms, history Later Exam 2 due

More information

A Reduction of Conway s Thrackle Conjecture

A Reduction of Conway s Thrackle Conjecture A Reduction of Conway s Thrackle Conjecture Wei Li, Karen Daniels, and Konstantin Rybnikov Department of Computer Science and Department of Mathematical Sciences University of Massachusetts, Lowell 01854

More information

Lecture 1. 1 Notation

Lecture 1. 1 Notation Lecture 1 (The material on mathematical logic is covered in the textbook starting with Chapter 5; however, for the first few lectures, I will be providing some required background topics and will not be

More information

Introduction to the Lambda Calculus

Introduction to the Lambda Calculus Introduction to the Lambda Calculus Overview: What is Computability? Church s Thesis The Lambda Calculus Scope and lexical address The Church-Rosser Property Recursion References: Daniel P. Friedman et

More information

NP-Completeness of 3SAT, 1-IN-3SAT and MAX 2SAT

NP-Completeness of 3SAT, 1-IN-3SAT and MAX 2SAT NP-Completeness of 3SAT, 1-IN-3SAT and MAX 2SAT 3SAT The 3SAT problem is the following. INSTANCE : Given a boolean expression E in conjunctive normal form (CNF) that is the conjunction of clauses, each

More information

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989

P Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989 University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1989 P Is Not Equal to NP Jon Freeman University of Pennsylvania Follow this and

More information

Outline. Introduction. 2 Proof of Correctness. 3 Final Notes. Precondition P 1 : Inputs include

Outline. Introduction. 2 Proof of Correctness. 3 Final Notes. Precondition P 1 : Inputs include Outline Computer Science 331 Correctness of Algorithms Mike Jacobson Department of Computer Science University of Calgary Lectures #2-4 1 What is a? Applications 2 Recursive Algorithms 3 Final Notes Additional

More information

Actually talking about Turing machines this time

Actually talking about Turing machines this time Actually talking about Turing machines this time 10/25/17 (Using slides adapted from the book) Administrivia HW due now (Pumping lemma for context-free languages) HW due Friday (Building TMs) Exam 2 out

More information