Time-Space Tradeoffs, Multiparty Communication Complexity, and Nearest-Neighbor Problems


Paul Beame
Computer Science and Engineering
University of Washington
Seattle, WA

Erik Vee
Computer Science and Engineering
University of Washington
Seattle, WA

(Research supported by NSF grants CCR and CCR.)

ABSTRACT

We extend recent techniques for time-space tradeoff lower bounds using multiparty communication complexity ideas. Using these arguments, for inputs from large domains we prove larger tradeoff lower bounds than previously known for general branching programs, yielding time lower bounds of the form T = Ω(n log² n) when the space S = n^(1-ε), up from T = Ω(n log n) for the best previous results. We also prove the first unrestricted separation of the power of general and oblivious branching programs by proving that 1GAP, which is trivial on general branching programs, has a time-space tradeoff of the form T = Ω(n log(n/S)) on oblivious branching programs. Finally, using time-space tradeoffs for branching programs, we improve the lower bounds on the query time of data structures for nearest neighbor problems in d dimensions from Ω(d/log n), proved in the cell-probe model [8, 5], to Ω(d), Ω(d √(log d / log log d)), or even Ω(d log d) (depending on the metric space involved) in slightly less general but more reasonable data structure models.

1. INTRODUCTION

Recently, the first non-trivial time-space tradeoff lower bounds have been shown for decision problems in P [7, 1, 2, 6]. These results are the culmination of almost two decades of analysis of branching programs, natural generalizations of decision trees to directed graphs that provide elegant models of both non-uniform time T and space S simultaneously. The key ideas in these recent papers extend notions from 2-party communication complexity, previously used in the study of restricted branching programs such as oblivious branching programs [3] or read-k branching programs [9], to general branching programs.

In this paper we extend and improve these results in several directions. We develop a new lower bound criterion, based on extending 2-party communication complexity ideas to multiparty communication complexity, that applies to general branching programs. We show that if a function is not constant on large embedded cylinder intersections then it has a large time-space tradeoff lower bound. This generalizes a lower bound technique from [7, 6] based on analyzing functions on large embedded rectangles. Applying this criterion to an explicit Boolean function based on a multilinear form over GF(q) for suitable q, we show lower bounds that yield T = Ω(n log² n) when S ≤ n^(1-ε), for inputs from a large domain. This improves the best lower bounds for general branching programs and matches the best lower bounds known even for oblivious branching programs.
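To fix intuition before the formal development, the display below is our restatement, in standard notation, of the objects behind this criterion; the formal definitions appear in Sections 2 and 3.3, and nothing here is new. A p-party cylinder intersection over a partition P_1, ..., P_p of the input variables is a set of the form

    C = C_1 ∩ C_2 ∩ ... ∩ C_p ⊆ D^N,   where membership in C_i depends only on the variables read by party i (that is, not on the coordinates in P_i),

and an embedded p-cylinder intersection additionally fixes a partial assignment (its spine) on all coordinates outside p disjoint feet A_1, ..., A_p ⊆ N, so that the intersection structure lives only on the feet.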
As a warm-up for our argument we give an alternative, conceptually simple proof, based on the ideas in the recent lower bounds for general branching programs, of the relationship between multiparty communication complexity and time-space tradeoffs for oblivious branching programs shown by Babai, Nisan, and Szegedy [4]. Using this we obtain time-space tradeoff lower bounds of the form T = Ω(n log(n/S)) for 1GAP on oblivious branching programs. Since 1GAP, the canonical complete problem for L, has a trivial general branching program of time n and width n (and therefore space O(log n)), this provides the first separation between general and oblivious branching program computation. (Separations have previously been shown between oblivious read-once branching programs and read-once branching programs, but not in the general case.)

Finally, extending the observation of Miltersen, Nisan, Safra, and Wigderson [13] that small-space branching programs are natural choices for static data structures and, thus, that query lower bounds are related to time-space tradeoff lower bounds, we develop lower bounds for nearest-neighbor problems in a variety of metric spaces (U^d, Δ). In the nearest neighbor problem we are given a database of n points in U^d and a query point x ∈ U^d and must determine the element of the database closest to x. We analyze the λ-near neighbor problem, a decision version of this problem that, as observed in [8], is at least as easy as the nearest-neighbor problem. The obvious algorithms for either problem are simply to store the elements of the database themselves and compare the query x in turn to each element of the database, using O(n) words and O(n) query time, or, if U^d is small, to pre-compute the answers to all possible queries using O(|U|^d) space and constant query time. More complex algorithms achieve query time polynomial in d and log n in spaces with larger U, but they still require n^Ω(d) storage in the worst case (see [8] for an overview of these algorithms). In general, for large dimensions, no better algorithms are known. It is generally conjectured that there is no nearest neighbor data structure having (dn)^O(1) memory cells of (d log n)^O(1) bits each and query time (d log n)^O(1). However, even small lower bounds are of interest since in certain applications, such as semantic indexing, d is the number of terms, on the order of tens of thousands, and n is the number of texts, on the order of millions to billions.
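As a point of reference for these bounds, here is a minimal sketch (in Python, ours rather than from the paper, over the Hamming cube for concreteness) of the first obvious algorithm: store the n database points verbatim and answer a λ-near neighbor query by a linear scan, using O(n) words and O(dn) time.

    def hamming(u, v):
        # Hamming distance between two equal-length tuples
        return sum(a != b for a, b in zip(u, v))

    def near_neighbor(database, x, lam):
        # lambda-near neighbor decision problem: accept iff some database
        # point is within distance lam of the query x.
        # Storage: the n points themselves; query time: one pass, O(d*n).
        return any(hamming(y, x) <= lam for y in database)

    # Example: three points in {0,1}^4, query at distance 1 from the first.
    db = [(0, 0, 0, 0), (1, 1, 1, 1), (1, 0, 1, 0)]
    print(near_neighbor(db, (0, 0, 0, 1), 1))   # True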

Borodin, Ostrovsky, and Rabani [8] show that even for computation in Yao's strong cell-probe model, with cells that may contain up to (d log n)^O(1) bits, solving the λ-near neighbor problem on the Hamming cube {0,1}^d requires query time Ω(d/log n) if the number of words is polynomial in n. Barkol and Rabani [5] have even extended these results to randomized computation, but the results are still far from the upper bounds. While lower bounds in the cell-probe model apply to the broadest possible class of data structure algorithms, it is not reasonable to expect that data structure algorithms can implement all the features of the cell-probe model, which includes, at zero cost, the ability to have a completely different access algorithm for each instantiation of the query and the ability to remember the entire history of the computation given the query. We consider slightly more restricted data structure algorithms with word size O(log n) that are allowed to make decisions based on one coordinate x_i ∈ U of the query at a time (at unit cost) and which use only (dn)^o(1) bits of extra space during their execution. In this model, for certain d-dimensional metric spaces, we prove query time lower bounds of the form Ω(d log d), or Ω(d √(log d / log log d)), depending on the space. These results follow easily from time-space tradeoff lower bounds for general branching programs. The set U in each of the metric spaces in these lower bounds is of size polynomial in d, and it would be interesting if we could extend these bounds to the Hamming space {0,1}^d considered in [8, 5]. We are able to do this in the special case that the data structure corresponds to an oblivious branching program; that is, the bits of the query x are accessed in the same order, no matter what the values of these bits are. In this case we obtain a query time lower bound of the form Ω(d log d). This lower bound is based on a careful combinatorial construction of a suitable database.

2. DEFINITIONS

Throughout this paper, D will denote a finite set and n a positive integer. We use [n] to denote the set {1, ..., n}. We view the input space, D^N, as the set of maps from N to D; we normally take N to be [n], and write D^n for D^[n]. If A ⊆ N, a point τ ∈ D^A is a partial input on A. For a partial input τ, vars(τ) denotes the set of indices to which τ assigns a value, and unset(τ) denotes the set N - vars(τ). If σ and τ are partial inputs with vars(σ) ∩ vars(τ) = ∅, then στ denotes the partial input on vars(σ) ∪ vars(τ) that agrees with σ on vars(σ) and with τ on vars(τ). For x ∈ D^N and A ⊆ N, the projection x_A of x onto A is the partial input on A that agrees with x. For S ⊆ D^N, S_A = {x_A : x ∈ S}. If f is a function with domain D^N and τ is a partial input, the restriction of f by τ, denoted f_τ, is the function with domain D^unset(τ) defined by the rule f_τ(σ) = f(στ) for σ ∈ D^unset(τ).

We adopt the usual definitions of deterministic branching programs. A k-way branching program is a directed acyclic graph satisfying the following: there is a unique vertex with in-degree 0, called the start node; the sink nodes are labeled with output values; every non-sink node is labeled with a variable name; and the out-degree of every non-sink node v is precisely k = |D| (we allow multi-edges), with every value from D assigned to precisely one out-edge from v. A branching program computes a function in the same way a decision tree does. Time, T, is the length of the longest consistently labeled path from the start node to a sink node. The size of a branching program is the number of its nodes. Space, S, is the base-2 logarithm of the size.
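To make the model concrete, the following is a small illustrative sketch (ours, not from the paper; the representation and names are assumptions) of how a k-way branching program computes a function: evaluation simply follows the edge labeled by each queried value, the time charged to an input is the number of edges traversed, and space is the base-2 logarithm of the number of nodes.

    import math

    # A node is either ('sink', output) or ('query', variable_index, edges),
    # where edges maps each value in D to the id of the next node.
    def evaluate(nodes, start, x):
        # Follow the unique consistent path from the start node; return
        # (output, path_length).  The path length is the time charged to x.
        t, node = 0, nodes[start]
        while node[0] == 'query':
            _, i, edges = node
            node = nodes[edges[x[i]]]
            t += 1
        return node[1], t

    def space(nodes):
        # Space of a branching program = log2(number of nodes).
        return math.log2(len(nodes))

    # Example: a 2-way program on x in {0,1}^2 computing x[0] AND x[1].
    nodes = {
        0: ('query', 0, {0: 3, 1: 1}),
        1: ('query', 1, {0: 3, 1: 2}),
        2: ('sink', 1),
        3: ('sink', 0),
    }
    print(evaluate(nodes, 0, (1, 1)), space(nodes))   # (1, 2) 2.0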
Any lower bound proven for a branching program also implies a lower bound for other computational models, such as Turing machines and Random Access Machines. A leveled branching program is a branching program in which the underlying graph is leveled. By a result of Pippenger [14], making a branching program leveled does not change T and adds at most log T to S. An oblivious branching program is a leveled branching program in which all the nodes on each level are labeled with the same variable. Call the sequence of variables queried at the successive levels the query sequence of the oblivious branching program.

Multiparty communication complexity, introduced by Chandra, Furst, and Lipton [10] in order to study oblivious branching programs, is an extension of the usual 2-party communication complexity. Suppose that p parties, each with unlimited computational power, wish to exchange information in order to compute the value of f : D^n → {0,1}, whose input has been divided according to P = (P_1, ..., P_p), a p-partition of the n variables x_1, ..., x_n. The i-th party receives the value of every x_j except for those in P_i. The parties exchange information by writing bits one at a time on a common blackboard according to a protocol which specifies, based on the bits already on the blackboard, who is to write next. The multiparty communication complexity of f with respect to the fixed partition P, written C_P(f), is the minimum number of bits required to be written. We also define the best-partition p-party communication complexity of f, C_p^best(f), to be the minimum fixed-partition communication complexity of f, taken over all p-partitions of the inputs into equal-size sets. Several lower bounds on the multiparty communication complexity of Boolean functions have been shown in [10, 4, 11, 12, 15] in the fixed-partition model. The lower bound techniques for multiparty communication complexity developed following [4] are an extension of the lower bound techniques for 2-party communication complexity, which rely on analyzing the properties of functions on large combinatorial rectangles. In the case of p-party communication complexity, lower bounds are proven by analyzing properties of functions on large cylinder intersections, which are sets of the form C_1 ∩ ... ∩ C_p where each C_i ⊆ D^N is a cylinder in that it only depends on the variables read by party i.

3. MULTIPARTY COMMUNICATION COMPLEXITY AND BRANCHING PROGRAMS

3.1 Oblivious Branching Programs

There is a close relationship between multiparty communication complexity and oblivious branching program complexity.

DEFINITION 3.1. Given a sequence s of values from a finite set N and a partition P of N' ⊆ N, the number of alternations of s with respect to P is the minimal r such that s can be written as s = s_1 s_2 ... s_r and each s_j contains no elements from at least one class in P. Each s_j is called an alternation of s with respect to P.

PROPOSITION 3.1. [10, 3, 4] Let f : D^N → {0,1}, let τ be a partial assignment to the variables of f, and let P be a partition of unset(τ). If there is an oblivious branching program B computing f that has width W and whose query sequence has at most r alternations with respect to P, then (r - 1) log W + 1 ≥ C_P(f_τ).

PROOF. Associate one party with each class P_i ∈ P. Each party has access to B and to τ. Suppose that the query sequence of B is s = s_1 ... s_r, where each s_j is an alternation with respect to P and s_j does not contain any element from the class P_{i_j} ∈ P. The parties all

3 follow the computation beginning with the start node of. For Ö, party follows the path in until the end of the query sequence and writes the name of the node of reached at the end of on the blackboard. This is possible since any query to ÚÖ µ can be answered by any party and any query to ÙÒ Ø µ avoids È and so can be answered by party. For Ö the name of the node requires at most ÐÓ Ï bits since has width at most Ï. For Ö this requires only 1 bit to yield the value of. The following lemma, an extension of one found in [7, 6], is an alternative to the approach based on generalized meanders in [4]. LEMMA 3.2. Let be a sequence of Ò elements from Ò. Divide into Ö equal segments Ö each of length ÒÖ. Independently assign each to one of Ô sets Ô uniformly at random with ÈÖ Ô. Define a random variable to be the number of elements of Ò not appearing in any segment in and let µ. Then Ò Ô and È Ö Ô Ö. PROOF. We first calculate the expectation. For any Ò, the probability that element never appears in any segment in is Ôµ Ø µ, where Ø µ is the number of segments in which appears. Since Ô, this probability is at least Ø µô. Thus Ò È µ Ø µô Ò Ò Ø µ ÔÒµ Ò Ô The second inequality follows from the arithmetic-geometric mean inequality, and the last equality È follows from the fact that the sequence is of length Ò, hence Ò Ø µ Ò. We next Ëbound the variance. Let be the event that variable Ò where here we identify each segment with the set of elements it contains. Further, for ¼ Ò, write ¼ if and ¼ appear in the same segment at least once. Then Î Ö µ ¼ ÈÖ ¼ ÈÖ ÈÖ ¼ µ If and ¼ never appear in a segment together, then the events and ¼ are independent, and the term ÈÖ ¼ ÈÖ ÈÖ ¼ µ is 0. If and ¼ do appear together in at least one segment, we upper bound the corresponding term by ÈÖ Ôµ Ø µ. Since each segment contains at most ÒÖ elements of Ò, the number of ¼ such that ¼ is bounded above by Ø µòö. So Î Ö µ Ò Ö Ò ¼ ÈÖ Ö Ò Ø µ Ò ¼ Ò Ø µ Ôµ Ø µ Ôµ Ø µ ÒÖ where the third inequality follows since Ø µ and Ôµ Ø µ are positive and anti-correlated. Applying Chebyshev s inequality we have È Ö Î Ö µ Ô Ö µ as required. Combining Lemma 3.2 with Proposition 3.1, we have THEOREM 3.3. Let Ò ¼ and let be an oblivious branching program that computes in time Ì Ò and width Ï. Then there is a subset of the variables Æ ¼ Ò of size Ò Ô Ò such that for any partial input on Æ ¼, Ô Ô ÐÓ Ï Ô Ø µ PROOF. Apply Lemma 3.2 with Ö Ô Ô to the query sequence of. This divides into Ö blocks of height ÒÖ corresponding to the segments of. Given a set of such blocks, let ÙÒ Ò µ be the set of variables not queried at any level in. By Lemma 3.2, for Ô, the probability that a random assignment of these blocks to sets Ô has ÙÒ Ò µ Ô Ò is less than Ô Ö Ô. Therefore by the probabilistic method there exists an assignment of blocks to the sets such that all Ô sets have ÙÒ Ò µ Ô Ò. Under this assignment, the ÙÒ Ò µ do not necessarily form a partition, since the sets may overlap. However, it is straightforward to choose Ô disjoint sets, Í Í Í Ô, with the property that Í ÙÒ Ò µ and Í Ô Ô Ò for all parties,. Ë Now, set Æ ¼ Ò Ò Í. Then the Í form an equal-size Ô- partition, È of Ò Æ ¼. Also, by construction, the query sequence of has at most Ö alternations with respect to È. Let be any partial assignment on Æ ¼. Proposition 3.1 immediately yields Ô Ô ÐÓ Ï È µ Ø Ô µ as required. 3.2 Lower Bounds for È Recall that È is simply the problem of Ø-connectivity for directed graphs that have out-degree one. 
More formally, for Ü Ò Ò, we define È Ò Ò Ò ¼ by È Ò Üµ if and only if there is a sequence of indices, such that, Ò and for all, we have Ü. (Notice that under this encoding, the vertices and Ø correspond to indices and Ò, respectively, and that the value of Ü Ò does not affect the value of È Ò Üµ.) To prove lower bounds for È we will use Ô-party communication complexity lower bounds previously shown for generalized inner product (ÁÈ), which is given by ÁÈ Î ÑÔ Þ Þ Þ Ôµ Ô Þ where each Þ ¼ Ñ and Þ is the -th bit Ä Ñ of Þ. PROPOSITION 3.4. [4] Let È be the uniform partition in which each Þ Þ Ñ is a single class. Then È ÁÈ ÑÔµ = Å Ñ Ô µ. THEOREM 3.5. Let Æ ¼ Ò with Æ ¼ Ò Ñ µô. Then there is a partial assignment, Ò Æ¼, such that Ô Ø È Òµ Å Ñ Ô µ PROOF. We give a reduction from ÁÈ ÑÔ based on the simple ordered binary decision diagram (OBDD) (a variant of an oblivious read-once branching program in which edges may skip over levels, see e.g. [16]),, of width 2 with ÔÑ nodes that computes ÁÈ ÑÔ. Set to be the partial assignment that puts a self-loop at all vertices corresponding to indices in Æ ¼. If Æ ¼, let node Ò Æ ¼ ; otherwise let be some node in Ò Æ ¼ and set µ. Given a best-partition Ô-party communication complexity protocol for È Ò, let È È È Ô be the partition of Ò Æ ¼ into Ô equal-sized classes used by this protocol and assume w.l.o.g. that È. We embed the nodes of in Ò Æ ¼ µ Ò by mapping the start node of to, mapping the remaining nodes in that read each Þ to the nodes of È, and mapping the 1-sink node of to Ò and the 0-sink to. Given an input Þ, we fix the out-edges of the embedded to obtain a graph Þ for which È Þµ ÁÈ ÑÔ Þµ. Since each party can construct the part of Þ it needs, we have a Ô-party communication complexity protocol solving ÁÈ ÑÔ under the uniform partition È ¼. Hence, Ô Ø È Òµ È ¼ ÁÈ ÑÔµ Å Ñ Ô µ by Proposition 3.4.
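For concreteness, here is a small sketch (ours, not from the paper) of the generalized inner product function GIP_{m,p} used in the reduction above: it is the parity of the m coordinate-wise ANDs of the p bit-vectors, and it can be evaluated by scanning the bits column by column while maintaining a single running parity, essentially the width-2 oblivious (OBDD) computation embedded into 1GAP in the proof of Theorem 3.5.

    from functools import reduce

    def gip(z):
        # z is a list of p bit-vectors, each of length m.
        # GIP(z_1, ..., z_p) = XOR over j of (AND over i of z_i[j]).
        m = len(z[0])
        return reduce(lambda acc, j: acc ^ all(zi[j] for zi in z), range(m), 0)

    def gip_streaming(z):
        # Same function, evaluated obliviously column by column while
        # maintaining only the running parity of the column ANDs.
        parity = 0
        for j in range(len(z[0])):
            col = 1
            for zi in z:          # read z_1[j], ..., z_p[j] in a fixed order
                col &= zi[j]
            parity ^= col
        return parity

    z = [[1, 0, 1, 1], [1, 1, 0, 1], [1, 0, 1, 1]]   # p = 3, m = 4
    print(gip(z), gip_streaming(z))                  # both 0: columns 0 and 3 are all-ones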

4 THEOREM 3.6. If an oblivious branching program with time Ì and space Ë solves È Ò, then Ì Å Ò ÐÓ Ò˵µ. PROOF. Let ÌÒ. From Theorem 3.3, for any Ô, there is a Æ ¼ of size Ò Ô Ò such that, for any partial assignment on Æ ¼, Ô Ô ÐÓ Ï Ô Ø È Òµ. Further, from Theorem 3.5, we know that for the uniform partition È, Ô Ø È Òµ È ÁÈ ÑÔµ for Ñ Ô Ò Ôµ. Combining this with Proposition 3.4, we see that Ô Ô ÐÓ Ï È ÁÈ ÑÔµ ¼ Ñ Ô ¼ Ô Ô ÒÔ for some constant ¼ ¼. Rearranging and using Ô ÐÓ Ô, setting Ô Ô in order to minimize Ô Ô, using Ë ÐÓ Ï, and taking logarithms we obtain ÐÓ Ò˵ ¼¼Ô for some constant ¼¼ ¼. Therefore Ì Ò Ò ÐÓ Ò˵ for some ¼ as required. 3.3 General Branching Programs We say that a subset Æ is an embedded Ô-cylinder intersection iff there exist ÔË disjoint subsets Ô Æ, a partial assignment to Æ Ô, and sets Ì ÙÒ Ø µ for Ô Ô such that µ. Following [6], we call the the feet of, the spine of, and the the legs of. Ôµ is called a footprint of ; it is balanced if Ô, and ordered if Ô, where for sets and if every element of is smaller than every element of. The foot-size Ñ µ is ÑÒ and the density Æ µ is ÙÒ Ø µ. Embedded rectangles are embedded 2-cylinder intersections. Here, we extend one of the approaches to obtaining time-space tradeoff lower bounds in [7, 6] from one based on embedded rectangles to one based on embedded Ô-cylinder intersections. THEOREM 3.7. Let Ô Ö be integers such that Ò Ö Ô Ô and let Ñ Ô Ò Ô µ. Let be a - way branching program of length at most µò and size at most Ë and let Á µ. There is a set of embedded Ô-cylinder intersections with balanced, ordered footprints whose union covers a subset Á ¼ Á with Á ¼ Á such that each satisfies µ, Ñ µ Ñ and Æ µ ÑÔ ÐÓ Òѵ ËÖ Á Ò. Let be a leveled branching program on Ò of length Ò. Given Ä Ò and an input Ü Ò define Ö Ü Äµ Ò to be the set of indices of variables queried in on input Ü at levels in Ä and let ÙÒ Ò Ü Äµ Ò Ö Ü Äµ. Given a partition Ä Ä Ä Ôµ of Ò and an input Ü Ò, define ÒÓ Ü Ä Ä Ôµ to be the sequence of nodes of that Ü reaches at levels Ò such that Ä for some Ô for which Ä. LEMMA 3.8. Let be a leveled branching program of length Ò and Ä Ä Ôµ be a partition of Ò. Let ¼ Ô be a partition of Ò, ¼, and let Ú Ú Öµ be a set of nodes of. Define µ to be the set of all inputs Ü such that ÒÓ Ü Ä Ä Ôµ Ú Ú Öµ, ÙÒ Ò Ü Ä µ for all Ô, and Ü ¼. Then is an embedded Ô-cylinder intersection with footprint Ôµ. PROOF. Let Ò ¼ Ô Ò ¼ Ô and let be the embedded Ô-cylinder intersection defined by, Ô and Ô. Clearly,, and it suffices to show that. Let Þ. By definition of, for each Ô there is a Ý such that Þ Ò ¼ Ý Ò ¼. Furthermore Þ ¼ ݼ so Þ Ò Ý Ò for Ô. For any, on input Ý, levels of in Ä only access variables in Ò and Ý and Þ agree on those variables, so the outcomes of queries on Þ at levels in Ä are the same those as on input Ý, although it is not immediate that they both follow the same path since they a priori might arrive at different nodes in these levels to begin with. However, since Þ Ý Ý Ô all begin at the same start node in Ä we see that by induction on the length of the path that for each, Þ follows precisely the same path in Ä as Ý. This implies that ÒÓ Þ Ä Ä Ôµ Ú Ú Öµ and that ÙÒ Ò Þ Ä µ ÙÒ Ò Ý Ä µ for Ô and Þµ. Therefore Þ as required. The follow proposition will be useful for satisfying the ordered footprint condition. PROPOSITION 3.9. Let ¼ Ô ¼ Ò be disjoint sets. Then there exist sets ¼ for each Ô and a permutation Ô Ô such that µ µ Ôµ and Ô ¼ for each. PROOF. We prove this by induction on Ô. The statement is clearly true for Ô. 
Suppose it is true for Ô ¼. Let Ò be minimum such that there is some with ¼ Ô. ¼ Let ¼, µ, and define ¼¼ ¼ Ò for. By construction, ¼ Ò Ô µô ¼ for. We now apply the inductive hypothesis to the Ô sets ¼¼ for to define the remainder of and sets ¼¼ ¼ with ¼¼ Ô µ Ô ¼ for. Clearly Ô are disjoint. PROOF OF THEOREM 3.7. Assume that Ö divides Ò. By Lemma 3.2, if we choose Ä Ä Ô Ò by dividing the levels of into Ö blocks of height ÒÖ and randomly, uniformly, and independently include each block in one of the Ä, then for any Ü Ò, for each Ô, ÈÖÙÒ Ò Ü Ä µ Ô Ò Ô Ö Ôµ. Therefore, there exists a choice of this assignment such that for some Á ¼¼ Á with Á ¼¼ Á, for all inputs Ü Á ¼¼, ÙÒ Ò Ü Ä µ Ô Ò for all Ô. Fix this choice. For the choice of Ä Ä Ô and all Ü, there are Ö ¼ Ö elements in ÒÓ Ü Ä Ä Ôµ, at most one each from levels ÒÖ of, for Ö. For any input Ü Á ¼¼, we find disjoint ¼ Ô ¼ with each ¼ Ô Ò Ôµ Ñ ¼, ¼ ÙÒ Ò Ü Ä µ. We then apply Proposition 3.9 to find sets ¼ with Ñ Ñ ¼ Ô and a permutation Ô Ô such that µ µ Ôµ. Let ¼ Ò Ô. Using the construction given by Lemma 3.8, let be the embedded Ô-cylinder intersection containing Ü on which outputs 1 defined by ¼ Ô, node sequence Ú Ú Ö ¼µ ÒÓ Ü Ä Ä Ôµ, and Ü ¼. Finally, we define ¼ ¼ and µ for Ô to yield an ordered footprint Ôµ for. We count the total number of such embedded Ô-cylinder intersections over all elements in Á ¼¼. To specify one such cylinder intersection, it suffices to describe its feet, spine, and its associated node sequence. There are fewer than ËÖ choices of its node sequence, Ò ÔÑ Ò Ô choices of its spine, and at most Ñ ÔÀ ÑÒµÒ ÔÑ ÐÓ Òѵ choices of disjoint Ô each of size Ñ.

5 Thus in total we obtain ÔÑ ÐÓ Òѵ ËÖ Ò ÔÑ such cylinder intersections that partition a set containing Á ¼¼. Therefore, by Markov s inequality, the portion of Á ¼¼ covered by such intersections that have density Æ µ ÔÑ ÐÓ Òѵ ËÖ Á Ò is at most of Á ¼¼. Therefore there is an Á ¼ Á ¼¼ with Á ¼ Á covered by embedded Ô-cylinder intersections with feet of size Ñ and with density at least ÔÑ ÐÓ Òѵ ËÖ Á Ò. To use Theorem 3.7, we need a function with high Ô-party communication complexity when Ô. We use Ô-tensor analogues of the quadratic forms considered in [7, 2, 6], although our tensors do not generalize either the modified Sylvester or random Hankel matrices used in those bounds. We define our function TENS ÔØÕ in several steps. For simplicity, Let Õ Ò and Ò be distinct elements of field Õ. Over Õ define È vectors Ú Ú Ø Ò Õ by Ú Ò µ. Ø Æ Let Ì Ô Ú; that is, for Ý ÝÔ Ò Õ, Ì Ý Ý Ôµ where Ô Ø Ô Ø Ú Ý µ Ú Ú Ô ÔÒ Ø Ô Ô Ô Ý Thus we can identify Ì by the Ò Ô array of coefficients Ô. Then in analogy with the argument in [2] we modify Ì by setting most of the entries in this array to 0, defining Ä Ì µ Ý Ý Ôµ È ÔÒ Ô É Ô Ý. Finally, let be any linear map and define TENS ÔØÕ Ò by TENS ÔØÕ Üµ Ä Ì µ Ü Üµµ; for definiteness, we identify the elements of with the residues of polynomials in Þ modulo some irreducible degree polynomial and take µ ¼µ. THEOREM Let ¼ be arbitrary, let Ô be an integer with Ô µ ÐÓ Ò, let Ô Ô ÐÓ Ò, let Õ, Õ and let Ø Ò. then any -way branching program computing TENS ÔØÕ Ò ¼ in time Ì µòô ÐÓ Ò requires space Ë Ò ÐÓ. In order to derive our results for TENS ÔØÕ we must show that it is not constant on any embedded Ô-cylinder intersection with a balanced, ordered footprint that has large feet and density. For any disjoint Ô Ò, and a tensor Ì define Ì Ô to be the tensor on Õ Ô Õ given by the Ô È subarray of the array of Ì. Observe that for Ì, Ø Æ Ì Ô Ô Ù where Ù Úµ. LEMMA If is an embedded Ô-cylinder intersection with balanced, ordered footprint Ôµ on which TENS ÔØÕ is constant then there is an embedded Ô-cylinder intersection ¼ with the same footprint and Æ ¼ µ Æ µ Ô on which Æ Ì Ô is constant. PROOF. Let ¼ Ò Ô. Observe that, since tensors are multilinear, Ä Ì µ Ü Üµ equals Ô¼Ô Ä Ì µ Ô Ü Ü Ô µ On, note that Ü ¼. By construction of the map Ä and the fact that the footprint is ordered, for a permutation, Ä Ì µ µ Ôµ ¼ unless is the identity. Further, any term that has an index containing ¼ must not have an index for at least one of the for. For Ô, collect all terms in the sum that have indices for all but do not have index and call the function given by that sum. Then Ä Ì µ Ü Üµµ Ä Ì µ Ô Ü Ü Ô µ Ôµ Ì Ô Ü Ü Ô µµ µ Ôµ by the order condition on the footprint and the linearity of. Let Ô be the legs of. For let Let Þ Þµµ and for Ô Õ let Ô be the embedded Ô-cylinder intersection defined by and Ô Ô. Choose ¼ Ô for the Ôµ that maximizes Ô. Clearly ¼ Ô and Æ Æ Ô is constant on ¼. If is a footprint of an embedded Ô-cylinder intersection, then µ denotes the set of all embedded Ô-cylinder intersections whose footprint is ; we extend this to Ô-partitions of È by identifying È with a footprint with an empty spine. For any function Ò Õ ¼, we define its discrepancy with respect to by µ ÑÜ È Ö Üµ and Ü µ È Ö Üµ ¼ and Ü Following Raz [15], for Ñ Õ µ Ô define µ Ý Ý Ô µ Ý Ý Ôµ. We say that is additive in its -th component if for all Ô Ñ Õ, Ôµ Ôµ Ôµ The following proposition is implicit in [15] and extends ideas from [11, 12]. PROPOSITION 3.12 ([15]). Let È be the Ô-partition of ÑÔ into Ñ Ñ Ñ Ñ Ô µñ. 
If Ñ Õ µ Ô is additive on each component then È µ µ Ô. Since the function Ô Õ given by ÆÌ Ô is additive in each component, by Proposition 3.12 we need only bound µ to bound Ôµ µ. The key property we use in this analysis is that if Ø Ô, the defining vectors for Ì Ô have the property that Ù Ù Ø are linearly independent for each Ô. This is immediate from the following lemma. LEMMA Let Ò with Ø and for Ø. For Ú Ú Ø defined as above Ú µ Ú Øµ are linearly independent. PROOF. Suppose that «Ú µ «Ø Ú Øµ ¼. Define the polynomial Þµ ««Þ «ØÞ Ø. Then our assumption implies that µ ¼ for all. Therefore has at least Ø distinct roots and since it has degree at most Ø, it must be identically 0, implying that all the «are 0. We now analyze the behavior of Ì Ô, finding it convenient to do this analysis in a more general fashion by analyzing arbitrary tensors Ì with similar properties. È Ø «Æ Ô Ù LEMMA If Ì and ¼ Õ ¼ then ÈÖ Ý Ý Ô Ì Ý Ý Ôµ ÈÖ Ý Ý Ô Ì Ý Ý Ôµ ¼ and ÈÖ Ý Ý Ô Ì Ý Ý Ôµ ¼ Õ

6 PROOF. Fix Ô Ò Õ. Then Ì Ý Ôµ Ø «Ô Ù µµ Ù Ý µ Ú Ô Ý È Ø where Ú Ô «ÉÔ Ù µµù. If Ú Ô ¼ then ÈÖ Ý Ú Ô Ý ¼ ÈÖ Ý Ú Ô Ý ¼ and ÈÖ Ý Ú Ô Ý ¼. Otherwise, for any Õ, ÈÖ Ý Ú Ô Ý Õ. The lemma follows. È Ø Æ LEMMA Suppose that Ì Ô «Ù, that for each Ô, Ù Ù Ö are linearly independent over Õ, and that at least Û ¼ of the «are nonzero. Then È Ô ÈÖ Ý Ý Ô Ì Ý Ý Ôµ ¼ ÕµÛ Õ. PROOF. We prove this by induction on Ô for all such Ì. In the case Ô, Ì Ý µ Ø «Ù Ý µ Ø «Ù µ Ý È Ø Since «Ù ¼ by the linear independence of Ù ÙØ, ÈÖ Ý Ì Ý µ ¼ Õ as required. Now assume the statement for all Ô µ-tensors Ì ¼ of this form and all Û ¼ for Ô. For Ò Õ let Æ Ô µ Ù Ô ¼ and «¼. Let Ë Ô Ò Õ Æ Ô µ Û. ÈÖ Ì Ý ÝÝ Ý Ôµ ¼ ÈÖÝ Ô Ë Ô Ô Ý Ô Now for Ë Ô, Ì Ý Ý Ô µ ÑÜ ÈÖ Ì Ý ÝÔ Ôµ ¼ (1) ÔË Ô ÝÝ Ô Ø «Ù Ô µ Ç Ô Ù Ç Ô «¼ Ù where «¼ «Ù Ô µ and, since Ë Ô, «¼ ¼ for at least Û ¼ Û values of. Therefore we can apply the inductive hypothesis to the Ô µ-tensor Ì Ý Ý Ô µ and Û ¼ Û to bound the È second term in (1). The first term in (1) is bounded above by Û Û Õ Û Õ Û Õµ Û. Adding both bounds yields the desired result. COROLLARY If Ñ Ø then Æ Ì Ô µ È Ô ÕµØ Õµ ØÔ and Ôµ Æ Ì Ô µ Õµ ØÔ. PROOF OF THEOREM Let Ñ Ø, Ôµ ÐÓ Ò Ô Ñµµ µô ÐÓ Ò and observe that Ñ Ô Ò Ô µ Ô Ñ. Let Ö Ô Ô and observe that Ö ÒÔѵ ÐÓ Ò Ô Ñµµ Ò. Since TENS ÔØÕ µ Ò we apply Theorem 3.7 to any branching program computing TENS ÔØÕ to find an embedded Ô-cylinder intersection with balanced, ordered footprint Ôµ with Ñ µ Ñ and Æ µ ÑÔ ÐÓ Òѵ ËÖ on which TENS ÔØÕ is 1. Applying Lemma 3.11 we get an embedded Ô-cylinder intersection ¼ on the same footprint and Æ ¼ µ Æ µ Ô on which Æ Ì Ô is constant. By corollary 3.16 we must have ÑÔ ÐÓ Òѵ ËÖ Ô Õµ ÑÔ. Solving for Ë, plugging in the upper bound on Ö, and bounding tiny terms yields Ë Ñ Ô ÐÓ Õ Ô ÐÓ Òѵ ÒÔ ÐÓ Ò Ô Ñµµµ. Since ÐÓ Õ Ô Ô ÐÓ Ò, Ë Ñ ÐÓ Õµ ÒÔ Ô ÐÓ Ò Ô Ñµµµ Ò ÐÓ Õ since Ô µ ÐÓ Ò. Although we would like to derive improved bounds for the Boolean case as well, the prospects are not good for obtaining such bounds based on using multiple parties in the subtler argument for this case introduced by Ajtai [1, 2]. The key argument there uses a property that is true in the 2-party case but whose analogue is false in the Ô-party case for Ô. Furthermore, in this argument a distribution of layers is chosen so that most layers are not assigned to any party. Values read in these layers would become part of the spine of the embedded cylinder intersection and the main multiparty advantage, larger feet for these cylinder intersections, would evaporate. 4. NEAREST NEIGHBOR DATA STRUC- TURES AND TIME-SPACE TRADEOFFS The -near neighbor problem ÆÆ over a metric space defined on Í with metric, has as input a query Ü Í, a database Ý Ý Ò Í as well as a fixed real number Ò µ and accepts iff there is some Ý such that Ü Ýµ. The static data structure version of the problem allows arbitrary preprocessing based on,, and and allows one to store information in some number of cells of memory, each of limited size so that given the query Ü as input, one can compute ÆÆ Ü µ efficiently. Thus three natural complexity measures are the amount that can be stored in each memory cell, the number of memory cells, and the query time. The following is an extension of an observation of Miltersen, Nisan, Safra, and Wigderson [13] on the relationship between static data structure problems in the cell-probe model and time-space tradeoffs. THEOREM 4.1. Let É Ñ Ò Ç be a static data structure problem; i.e., given a query Ü Ñ and a database Ò, compute an output Ü µ Ç. 
If for every ¼ Ò there is a É-way branching program ¼ computing the function ¼ in time Ì and space Ë, then for any there is a static cell-probe data structure using Ë memory cells of É Ë ÐÓ Ñµ bits each to store any database so that the query time to solve is at most Ì. PROOF. The memory cells of the data structure will store the Ë nodes of the É-way branching program ¼. Each memory cell corresponding to a node Ú ¼ contains the names of the variables queried and pointers to the nodes reachable by each of the É paths of length starting at Ú in ¼. Thus cell-probe lower bounds require time-space tradeoff lower bounds. We prove a converse under somewhat more restrictive assumptions about the data structure. We assume that, given a query, a data structure algorithm initially reads some portion of the input query and then chooses a memory cell from which to read. At each subsequent step, based on the contents of the memory cell identified in the previous step, it reads some more from the input and determines which memory cell to read from next. Such an algorithm will always have enough storage to name a memory location in the data structure and it may have some additional work space. THEOREM 4.2. Let É Ñ Ò Ç be a static data structure problem. If there is a data structure having at most Ë cells of memory that reads at most consecutive components of the query in a single time step and solves using query time at most Ì and additional work space at most Ë, then for every ¼ Ò there is a É -way branching program running in time Ç Ì µ and space Ç Ë ÐÓ Ì µ that computes Ü ¼µ.

7 PROOF. A node in the branching program will correspond to a pair consisting of the name of a memory cell and a configuration of the additional work space of the data structure (including the program counter of the algorithm). With the database fixed to ¼, the contents of each memory cell are determined by its name. Given this configuration, the components of the query to be accessed in a single time-step are determined by the algorithm, the configuration of the additional work space and the memory cell just accessed from the previous step. The new values of these quantities are determined by the value of these query components. Thus we can obtain lower bounds for data structure problems for natural classes of such algorithms by proving time-space tradeoff lower bounds. In this section we do this for several -near neighbor problems. Although the memory cell size does not appear in the above statement explicitly, the bound on the number of query components that can be read in a single step is usually a function of the number that can fit in a single memory cell Near Neighbor Lower bounds in Large Spaces Given a metric ¼ on Í we can derive a metric È on Í as the composition of ¼, defining Ü Ýµ ¼ Ü Ýµ. Thus the usual Hamming metric on Í is just the composition of the inequality metric. Similarly if Í ¼ and ¼ is the Hamming metric on Í then its composition is just the Hamming metric on ¼. Call this the compositional Hamming metric on Í. For ¼, define to be with each bit complemented and ÓÙÐ µ ¼. For Ü Ü Ü µ ¼ µ define ÓÙРܵ ÓÙÐ Ü µ ÓÙÐ Ü µµ. Define ¼ µ to be the set of all vectors Ý for, ¼ where Ý is ÓÙÐ µ in its -th and -th coordinate and ¼ in all other coordinates. THEOREM 4.3. Let Í µ be the metric space where Í, and Ü Ýµ is the usual Hamming metric on Í, the number of coordinates on which Ü and Ý differ. Then there exists a database ¼ Í of size Ò µ and such that any Í-way branching program (or RAM algorithm) computing ÆÆ Ü Ô ¼µ on Í µ in time Ì and space Ë requires Ì Å ÐÓ Ëµ ÐÓ ÐÓ Ëµµ. PROOF. Assume without loss of generality that for some integer. In [6], it is shown that the element distinctness problem, µ ¼ Ô has a time-space tradeoff lower bound of the form Ì Å ÐÓ Ëµ ÐÓ ÐÓ Ëµµ. For suitable and ¼ we give a reduction from to ÆÆ Ü ¼µ. We identify elements of with elements of ¼ and elements of with elements of ¼. Set ¼ and. Observe that for all ¼, ÓÙÐ µ ¼. If ܵ then there is an and such that Ü Ü and it is easy to see that ÓÙРܵ ݵ. Furthermore, if ܵ ¼ then for all Ý, ÓÙРܵ ݵ. COROLLARY 4.4. Any data structure algorithm solving ÆÆ over Í µ where Í, and Ü Ýµ is the usual Hamming metric on Í, that uses ÒµÓ µ memory cells and at most Òµ Ó µ additional space and reads one component of the query per time step requires query time Å Ô ÐÓ ÐÓ ÐÓ µ. THEOREM 4.5. Let Í µ be the metric space where Í, and Ü Ýµ is the compositional Hamming metric on Í. Then there exists a database ¼ Í of size Ò µ and such that any Í-way branching program (or RAM algorithm) computing ÆÆ Ü ¼µ on Í µ in time Ì and space Ë requires Ì Å ÐÓ ÐÓ µëµµ. PROOF. Assume without loss of generality that is an integral power of 2. For, define the Hamming closeness problem ÀÅ ¼ µ ¼ to be one on input Ü Ü µ if and only if there is some such that the Hamming distance between Ü and Ü is at most. In [6], it is shown that the Hamming closeness problem, ÀÅ µ ¼ where À µµ and À µ ÐÓ µ ÐÓ µ has a time-space tradeoff lower bound of the form Ì Å ÐÓ ÐÓ µëµµ. We will consider ¼ for which À µµ. 
For a suitable ¼ and we give a reduction from ÀÅ to ÆÆ Ü ¼µ. We identify elements of with elements of ¼ and elements of with elements of ¼. Set ¼ and µ µ ÐÓ. Observe that for ¼, ¼ ÓÙÐ µ ¼ µ and that for any ¼ ¼ ÓÙÐ µ ÓÙÐ µµ ¼ ÓÙÐ µ ÓÙÐ µµ ¼ µ ¼ µµ ¼ µ by the triangle inequality. Therefore, if ÀŠܵ then there are and such that Ü, Ü, and ¼ µ. For this value observe that ÓÙРܵ Ý µ. If ÀŠܵ ¼ then by the above observations for every Ý, ÓÙРܵ Ý µ. COROLLARY 4.6. Any data structure algorithm solving ÆÆ over Í µ where Í, and Ü Ýµ is the compositional Hamming metric on Í, that uses ÒµÓ µ memory cells and at most Òµ Ó µ additional space and reads one component of the query per time step requires query time Å ÐÓ µ Near Neighbor Lower Bounds in ¼ Note that by setting ¼ ÐÓ ¼ and apply Corollary 4.6 with ¼ instead of, we obtain COROLLARY 4.7. Any data structure algorithm solving ÆÆ on the Hamming space over ¼ with ÒµÓ µ memory cells and Òµ Ó µ additional space that reads Ç ÐÓ µ Ç ÐÓ Òµ consecutive bits of the query per step requires Å µ query time. This lower bound is larger than those of [8, 5] by a ÐÓ Òµ factor. (Note that, in the communication game model used in those papers, a model even stronger and less reasonable than the cellprobe model, query time ÐÓ Òµ is optimal.) In this section we prove a query-time lower bound for the ¼ Hamming model that is a factor Å ÐÓ µ larger still but under the restriction that we can access only one bit of the query per step and that this corresponds to an oblivious rather than a general branching program. Note that this is incomparable with Corollary 4.7. THEOREM 4.8. Any data structure algorithm solving ÆÆ on the Hamming space over ¼ using ÒµÓ µ memory cells and Òµ Ó µ additional space that accesses one query bit per time step and in a fixed order requires query time Å ÐÓ µ.
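Both the metrics of Section 4.1 and the Hamming space {0,1}^d of this subsection are compositional in the sense defined at the start of Section 4.1. The following small sketch (ours, with hypothetical names) shows the composition construction: composing the inequality metric recovers the usual Hamming metric on U^d, while composing the per-block Hamming metric on U = {0,1}^c gives the compositional Hamming metric.

    def compose(base_metric):
        # Lift a metric on U to the sum metric on U^d:
        # Delta(x, y) = sum over i of base_metric(x_i, y_i).
        return lambda x, y: sum(base_metric(a, b) for a, b in zip(x, y))

    inequality = lambda a, b: int(a != b)
    hamming_on_blocks = lambda a, b: sum(u != v for u, v in zip(a, b))

    usual_hamming = compose(inequality)                  # metric on U^d
    compositional_hamming = compose(hamming_on_blocks)   # metric on ({0,1}^c)^d

    x = [(0, 1), (1, 1), (0, 0)]
    y = [(0, 1), (0, 0), (0, 1)]
    print(usual_hamming(x, y), compositional_hamming(x, y))   # 2 3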

8 The rest of this section is devoted to the proof of the time-space tradeoff lower bound for ÆÆ over ¼ on oblivious branching programs that implies this theorem. It will be useful to consider elements of ¼ as vectors from whose indices are from rather than. Furthermore, we assume that is a power of two, set ÐÓ, and view those indices as themselves elements of. Thus, if and Ú is a vector, then Ú will denote the -th coordinate of Ú and we extend this notation to the partial vectors Ú Â for subsets  defined as the projection of Ú on  in accordance with our usual notation. Given indices for coordinates of Ú, the expressions and are well-defined, taking place in rather than over the integers. We will use ¼ and to denote the all 0 s and all 1 s vectors respectively. DEFINITION 4.1. Let Î Æ Õ be a vector space, Ú Î, and Ë Î. Define ÞÖÓ Úµ Æ Ú ¼, let ÓÒ Úµ Ìdenote the complement of ÞÖÓ Úµ, and define ÞÖÓ Ëµ Ë ÞÖÓ µ. Throughout this section, will denote the usual Hamming metric on. For any subset of the coordinates, Ë, we extend this metric to pairs of partial vectors Ù Ë Ú Ë defined on Ë in the obvious way and define Ë Ù Úµ Ù Ë Ú Ëµ. Finally, for Ù Ú, let Ù Ú denote the bitwise AND of Ù and Ú so that Ù Úµ Ù Ú for all. Notice that Ù Ú Ù Ûµ ÓÒ Ùµ Ú Ûµ for any Ù Ú Û. We create a database ¼ with µ elements over for which computing ÆÆ ¼ ܵ ÆÆ Ü ¼µ with has a large time-space tradeoff lower bound on oblivious branching programs. We will derive the lower bound using Theorem 3.3 with Ô and a reduction of the 2-party fixed-partition communication problem ÉÍÄÁÌ to the 2-party best-partition communication problem ÆÆ ¼ for a suitable. The database ¼ is based on extensions of Hadamard codes. Given Ú and «, define by Ú «µ Ú «for each. Define the database ¼ Ù «¼µ Ú «µ «¼ «Ù Ú Ù Ú The elements of our database are based on pairs of outputs of. If we only needed to prove lower bounds with respect to the 2-party fixed-partition as opposed to the best-partition model then a simpler construction based on the following Lemma 4.9 would suffice. It shows that, if a set of coordinates Ë has a natural pairing and a vector is far away on Ë from every member of a family of outputs of, then the values of on those paired coordinates satisfy ÉÍÄÁÌ or its dual. Lemma 4.10 shows that whenever satisfies these properties then a simple encoding of is in fact far from all vectors in ¼, and conversely. This is the key property that allows us to prove the reduction from ÉÍÄÁÌ even in the best-partition model. LEMMA 4.9. Let ¼ and. Let Ë be such that for all Ë, Ë. Suppose that for all Ú «µ with Ú, Ë Ú «µ µ Ë. 1. If ¼, then for all Ë. 2. If, then for all Ë. LEMMA Let. Let Ù «µ, with Ù ¼. Let ¼. Then Ù «µ µ Þµ for all Þ ¼ if and only if 1. ÓÒ Ù«µµ ¼µ and 2. ÓÒ Ù«µµ Ú µµ for all Ú µ such that Ù Ú. We give the proofs of these lemmas after some preliminaries. Given a vector space Ï with subspace Í and Ù Ï, the set Ù Í Û Ï Û Ù Ù ¼ for some Ù ¼ Í is called an affine subspace of Ï, or simply an affine space; Í is called the underlying space of Ù Í. If Î is an affine subspace, we denote its underlying space by Î. Observe that PROPERTY (1) if Î Î ¼ are affine spaces then Î Î ¼ is an affine space and Î Î ¼ Î Î ¼ unless Î Î ¼. (2) If Î Î ¼, then Î Î ¼ Î Î ¼. We use two key properties of. PROPERTY (1) is linear and (2) For any Ú «µ, the sets ÞÖÓ Ú «µµ and ÓÒ Ú «µµ are affine subspaces of. We rely on the linearity of for our reduction from ÉÍÄÁÌ. Since some of our properties apply to all linear functions over larger fields we state them in this form. 
Let Õ be the field of Õ elements, and let Ò ¼, ¼ be integers such that Ò ¼ Õ ¼. We will think of the indices of vectors from Ò¼ Õ as elements of ¼ Õ. LEMMA Let Î be a vector space, and suppose Î Ò¼ Õ is a linear function. È Let Ü Ò¼ Õ, let Ù Ú Î, and let Ë ÓÒ Úµµ. Then «Õ Ë Ü Ù «Úµµ Õ µë. PROOF. Let Ë ÓÒ Úµµ. Clearly, there is exactly one «Õ such that Ü Ù µ «Ú µ since Ú µ ¼. So È «Õ Ü Ù «Úµµ Õ µ. Our claim follows. LEMMA Let Î be an affine subspace of ¼¼ Õ with underlying space Î. Let ¼¼ Õ Ò¼ Õ be a linear function. Let Ë ¼ Õ, Ì Î, and Ü Ò ¼ Õ. If for all Ú Î, Ë Úµ ܵ Õ Õ µë, then ËÞÖÓ Ì µµ Úµ ܵ µë ÞÖÓ Ì µµ for all Ú Î. PROOF. Suppose Ë Úµ ܵ Õ µë, and let Ì be any subset of Î. We will show by induction on the size of Ì that ËÞÖÓ Ì µµ Úµ ܵ Õ µë ÞÖÓ Ì µµ. The base case is trivial, so suppose that for all Ú Î, Ë ¼ Úµ ܵ Õ µë¼ for Ë ¼ Ë ÞÖÓ Ì ¼ µµ and Ì ¼ a subset of Ì with Ì Ì ¼. Let Ù Ì Ì ¼. Notice that for «Õ that Ú «Ù Î since Ù Î. Hence, for any «Õ, Ë ¼ Ú «Ùµ ܵ Õ µë¼. Further, since Ú «Ùµ and Úµ agree on all coordinates in ÞÖÓ Ùµµ, Ë ¼ ÞÖÓ Ùµµ Úµ ܵ Ë ¼ ÓÒ Ùµµ Ú «Ùµ ܵ Õ µë¼ Summing the above inequalities over all «Õ and applying Lemma 4.13, we see that Ë ¼ ÞÖÓ Ùµµ Úµ ܵ Õ µë¼ ÞÖÓ Ùµµ Since Ë ¼ ÞÖÓ Ùµµ Ë ÞÖÓ Ì µµ, this proves our claim. We can now apply this Õ to derive Lemma 4.9 as a corollary.

9 PROOF OF LEMMA 4.9. Fix Ë. Let Í ¼ Ú «µ Ú «¼ and let Î Ú «µ Ú. Clearly, Í ¼ Î ¼ Î. Furthermore, ÞÖÓ Í ¼ Î ¼ µµ Ë. Applying Lemma 4.14 with Ì Í ¼ Î ¼ we see that Ú «µ µ for all Ú «µ Î. (i) If ¼, then ¼ ¼µ ¼ µ Î and so ¼ µ and µ which together imply that. (ii) If, then choose a Ù such that Ù. We see that the -th coordinate and the µ-th coordinate of Ù «µ must differ for any «. Further, Ù «µ Ù «µ. Hence, and thus. ¼ µ µ ¼µ µ In order to prove Lemma 4.10, we use the following technical lemma based on the fact that Hadamard codewords are characteristic vectors of affine subspaces. LEMMA Let Þ ¼ with Þ ¼. Let Ù «µ with Ù ¼. If ÓÒ Ù «µµ ÓÒ Þµ, then there exists a Ú µ such that Þ Ù «µ Ú µ. PROOF. Since Þ ¼, there are ¼µ Ø µ with Ø such that Þ ¼µ Ø µ. For convenience, let Í ÓÒ Ù «µµ and let Î ÓÒ Þµ ÓÒ ¼µµ ÓÒ Ø µµ. Note that both Í and Î are affine spaces. Since Þ ¼, Î. Hence, Î ¼ Ø, so that Î ¼ Ø Ø. Similarly, we see that Í ¼ Ù. By hypothesis, Í Î, so Í Î and thus Í Î. Hence, Ù ¼ Ø Ø. Since Ù ¼ by assumption, we have three cases to consider. (i) If Ù, then Þ Ù ¼µ Ø µ with Ù Ø. (ii) If Ù Ø, then Þ ¼µ Ù µ with Ù. (iii) If Ù Ø, then note that Þ ¼µ Ø µ ¼µ Ø µµ Ø µ Ù ¼ µ Ø µ by linearity of. Furthermore, Ù Ø Øµ Ø Ø Ø Ø. So in all cases, Þ Ù «¼ µ Ú µ for some «¼ and Ú µ with Ù Ú. If «¼ «, we are done. Otherwise, «¼ «, so that ÓÒ Þµ ÓÒ Ú «µµ ÞÖÓ Ú «µµ. But ÓÒ Þµ ÓÒ Ú «µµ by assumption. So ÓÒ Þµ. That is, Þ ¼, a contradiction. PROOF OF LEMMA For the first direction, suppose that Ù «µ µ Þµ for all Þ ¼. Then in particular for any Ú µ with Ù Ú, Ù «µ µ Ù «µ Ú µµµ. This implies that ÓÒ Ù«µµ Ú µµ for all such Ú µ. Furthermore, ÓÒ Ù«µµ ¼µ Ù «µ µ ¼µ since ¼ ¼. This proves the first direction. For the other direction, suppose that Ù «µ µ Þµ for some Þ ¼. If Þ ¼, then ÓÒ Ù«µµ ¼µ Ù «µ µ ¼µ. So we may assume that Þ ¼. We have two cases to consider: (i) If ÓÒ Ù «µµ ÓÒ Þµ, then by Lemma 4.15, Þ Ù «µ Ú µ for some Ú µ such that Ù Ú. Hence, ÓÒ Ù«µµ Ú µµ Ù «µ µ Ù «µ Ú µµµ Ù «µ Þµ (ii) If ÓÒ Ù «µµ ÓÒ Þµ, then ÓÒ Ù «µµ ÓÒ Þµ ÞÖÓ Ù «µµ ÓÒ Þµ by Property 4.11 since ÓÒ Ù «µµ and ÓÒ Þµ are both affine spaces. That is, ÓÒ Ù«µµ ¼ Þµ ÞÖÓ Ù«µµ ¼ Þµ. Hence, ÓÒ Ù«µµ ¼µ ÓÒ Ù«µµ Þµ ÓÒ Ù«µµ ¼ Þµ ÓÒ Ù«µµ Þµ ÞÖÓ Ù«µµ ¼ Þµ This completes the proof. Ù «µ µ Þµ In order to define the encoding of the input to ÉÍÄÁÌ we use the following simple extension of the usual arguments for best-partition communication complexity with additional twists to handle our particular needs. LEMMA Let with Ñ and. Then there exist ¼, «, and sets ¼ ÓÒ «µµ and ¼ ÓÒ «µµ such that ¼ ¼ and ¼ ¼ Ñ µ. PROOF. We first find an appropriate. Notice that Ù Ù Ù µ µ µ Ñ where the first inequality stems from the fact that for every, there are at least values for Ù such that Ù. Hence, by the pigeonhole principle, there is a such that µ Ñ Ñµ Ñ for Ñ. Also, ¼ since. Let ¼¼ µ, and let ¼¼ ¼¼. Notice that ¼¼ and ¼¼. Since ÓÒ ¼µµ and ÓÒ µµ are disjoint and together cover all of, there is an «such that ¼¼ ÓÒ «µµ ¼¼ Ñ µ. Choose ¼ so that ¼ ¼¼ ÓÒ «µµ and ¼ Ñ µ. Let ¼ ¼. Observe that since, trivially, ¼ we have ÓÒ «µµ ÓÒ «µµ. Therefore, ¼ ¼ ¼¼ µ ÓÒ «µµµ This completes the proof. ¼¼ ÓÒ «µµ Given as in Lemma 4.16, fix some ¼, ¼, and «guaranteed by the lemma. Write ¼ as ¼ Ñ µ. Since ¼ ¼, we may also write ¼ as ¼ Ñ µ with for all Ñ µ. For Ü Ý ¼ Ñ µ, define a vector Ü Ýµ as follows: If ¼, define Ü Ýµ to be the vector which is ¼ outside ÓÒ «µµ ¼ ¼, and has Ü in location, Ý in location for Ñ µ, and elsewhere. 
If, define Ü Ýµ to be the vector which is ¼ outside ¼ ¼, and has Ü in location and Ý in location for Ñ µ. Observe that for ¼, Ü Ýµ Ü Ýµ for all if and only if Ü Ý. Further, we note that for, Ü Ýµ Ü Ýµ for all if and only if Ü Ý. Also observe that Ü Ýµ ¼µ whenever Ü Ý. Notice that Ü Ýµ is ¼ for all coordinates in ÓÒ «µµ. Also notice that given the coordinate sets and, it is possible to compute the value of Ü Ýµ on Ñ

10 all coordinates of given only the value of Ü. Similarly, we can compute the value of Ü Ýµ on all coordinates of given the value of Ý. LEMMA Given disjoint with Ñ, and Ü Ý ¼ Ñ µ, let and Ü Ýµ be defined as above. Then Ü Ý if and only if for all Ú µ such that Ú, ÓÒ «µµ Ü Ýµ Ú µµ. PROOF. Either ¼ or. (i) Suppose ¼. To prove one direction, suppose that ÓÒ «µµ Ü Ýµ Ú µµ for all Ú µ such that Ú. Note that Ú if and only if Ú since ¼. So we may apply Lemma 4.9 with Ë ÓÒ «µµ to see that Ü Ýµ Ü Ýµ for all ÓÒ «µµ. That is, Ü Ý as required. Now consider the other direction. Suppose that Ü Ý. Then Ü Ýµ Ü Ýµ for all ÓÒ «µµ. For any Ú µ with Ú, we see that Ú µ Ú µ for all. Hence, Ü Ýµ Ú µµ for any ÓÒ «µµ. Thus, ÓÒ «µµ Ü Ýµ Ú µµ ÓÒ «µµ (ii) The case when follows analogously. THEOREM Let,, Æ, and Æ ¼ Æ with Æ ¼ Ñ. Then over ¼ Æ, there is a partial assignment, ¼ Ƽ, such that Ø ÆÆ ¼ µ Ñ µ PROOF. We do this by reduction from ÉÍÄÁÌ Ñ. Let be disjoint subsets of Æ Æ ¼ with Ñ. Given Ü Ý ¼ Ñ µ, define Ü Ýµ based on and as above. By Lemma 4.17, Ü Ý if and only if ÓÒ «µµ Ü Ýµ Ú µµ for all Ú µ such that Ú. For Ü Ý, ÓÒ «µµ Ü Ýµ ¼µ, so we may apply Lemma 4.10 to find that Ü Ý if and only if for all Þ ¼, Ü Ýµ Þµ Set to be the partial assignment that is equal to Ü Ýµ for coordinates in, and unassigned elsewhere. Note that is constant with respect to Ü and Ý. Then given a two-party bestpartition communication complexity protocol solving ÆÆ ¼, we can construct a two-party communication complexity protocol solving ÉÍÄÁÌ on Ñ bits. Hence, Ø ÆÆ ¼ µ ÉÍÄÁÌ Ñ µ Ñ µ. THEOREM If an oblivious branching program with time Ì and space Ë solves ÆÆ ¼ over ¼, then Ì Å ÐÓ Ëµµ. PROOF. Let Ì. From Theorem 3.3 with Ô, there is an Æ ¼ of size Æ ¼ along with a partial assignment, on Æ ¼ such that Ø ÆÆ ¼ µ ÐÓ Ï and from Corollary 4.18 with Ñ, we have Ø ÆÆ ¼ µ. Combining this, we obtain ÐÓ Ï. Since Ë ÐÓ Ï we obtain ¼ ÐÓ Ëµ for some constant ¼ ¼ and thus Ì ¼ ÐÓ Ëµ. Acknowledgements Thanks to Yuval Rabani for suggesting the investigation of timespace tradeoffs for the nearest neighbor problem and to Matt Cary for helpful discussions. 5. REFERENCES [1] M. Ajtai. Determinism versus non-determinism for linear time RAMs with memory restrictions. In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, pages , Atlanta, GA, May [2] M. Ajtai. A non-linear time lower bound for boolean branching programs. In Proceedings 40th Annual Symposium on Foundations of Computer Science, pages 60 70, New York,NY, October IEEE. [3] Noga Alon and Wolfgang Maass. Meanders and their applications in lower bounds arguments. Journal of Computer and System Sciences, 37: , [4] László Babai, Noam Nisan, and Márió Szegedy. Multiparty protocols, pseudorandom generators for logspace, and time-space trade-offs. Journal of Computer and System Sciences, 45(2): , October [5] O. Barkol and Y. Rabani. Tighter lower bounds for nearest neighbor search and related problems in the cell probe model. In Proceedings of the Thirty-Second Annual ACM Symposium on Theory of Computing, pages , Portland,OR, May [6] Paul Beame, Michael Saks, Xiaodong Sun, and Erik Vee. Super-linear time-space tradeoff lower bounds for randomized computation. In Proceedings 41st Annual Symposium on Foundations of Computer Science, pages , Redondo Beach, CA, November IEEE. [7] Paul W. Beame, Michael Saks, and Jayram S. Thathachar. Time-space tradeoffs for branching programs. In Proceedings 39th Annual Symposium on Foundations of Computer Science, pages , Palo Alto, CA, November IEEE. [8] A. 
Borodin, R. Ostrovsky, and Y. Rabani. Lower bounds for high dimensional nearest neighbor search and related problems. In Proceedings of the Thirty-First Annual ACM Symposium on Theory of Computing, Atlanta, GA, May 1999.
[9] Allan Borodin, A. A. Razborov, and Roman Smolensky. On lower bounds for read-k-times branching programs. Computational Complexity, 3:1-18, 1993.
[10] Ashok K. Chandra, Merrick L. Furst, and Richard J. Lipton. Multi-party protocols. In Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing, pages 94-99, Boston, MA, April 1983.
[11] F. R. K. Chung. Quasi-random classes of hypergraphs. Random Structures and Algorithms, 1(4), 1990.
[12] F. R. K. Chung and P. Tetali. Communication complexity and quasi-randomness. SIAM Journal on Discrete Mathematics, 6(1), 1993.
[13] P. B. Miltersen, N. Nisan, S. Safra, and A. Wigderson. On data structures and asymmetric communication complexity. Journal of Computer and System Sciences, 57(1):37-49, 1998.
[14] Nicholas J. Pippenger. On simultaneous resource bounds. In 20th Annual Symposium on Foundations of Computer Science, San Juan, Puerto Rico, October 1979. IEEE.
[15] R. Raz. The BNS-Chung criterion for multi-party communication complexity. Computational Complexity, 9, 2000.
[16] Ingo Wegener. Branching Programs and Binary Decision Diagrams: Theory and Applications. Society for Industrial and Applied Mathematics, 2000.


Correlation Clustering Correlation Clustering Nikhil Bansal Avrim Blum Shuchi Chawla Abstract We consider the following clustering problem: we have a complete graph on Ò vertices (items), where each edge Ù Úµ is labeled either

More information

Expander-based Constructions of Efficiently Decodable Codes

Expander-based Constructions of Efficiently Decodable Codes Expander-based Constructions of Efficiently Decodable Codes (Extended Abstract) Venkatesan Guruswami Piotr Indyk Abstract We present several novel constructions of codes which share the common thread of

More information

Information-Theoretic Private Information Retrieval: A Unified Construction (Extended Abstract)

Information-Theoretic Private Information Retrieval: A Unified Construction (Extended Abstract) Information-Theoretic Private Information Retrieval: A Unified Construction (Extended Abstract) Amos Beimel ½ and Yuval Ishai ¾ ¾ ½ Ben-Gurion University, Israel. beimel@cs.bgu.ac.il. DIMACS and AT&T Labs

More information

The strong chromatic number of a graph

The strong chromatic number of a graph The strong chromatic number of a graph Noga Alon Abstract It is shown that there is an absolute constant c with the following property: For any two graphs G 1 = (V, E 1 ) and G 2 = (V, E 2 ) on the same

More information

9.5 Equivalence Relations

9.5 Equivalence Relations 9.5 Equivalence Relations You know from your early study of fractions that each fraction has many equivalent forms. For example, 2, 2 4, 3 6, 2, 3 6, 5 30,... are all different ways to represent the same

More information

Competitive Analysis of On-line Algorithms for On-demand Data Broadcast Scheduling

Competitive Analysis of On-line Algorithms for On-demand Data Broadcast Scheduling Competitive Analysis of On-line Algorithms for On-demand Data Broadcast Scheduling Weizhen Mao Department of Computer Science The College of William and Mary Williamsburg, VA 23187-8795 USA wm@cs.wm.edu

More information

Simultaneous Optimization for Concave Costs: Single Sink Aggregation or Single Source Buy-at-Bulk

Simultaneous Optimization for Concave Costs: Single Sink Aggregation or Single Source Buy-at-Bulk Simultaneous Optimization for Concave Costs: Single Sink Aggregation or Single Source Buy-at-Bulk Ashish Goel Ý Stanford University Deborah Estrin Þ University of California, Los Angeles Abstract We consider

More information

Connected Components of Underlying Graphs of Halving Lines

Connected Components of Underlying Graphs of Halving Lines arxiv:1304.5658v1 [math.co] 20 Apr 2013 Connected Components of Underlying Graphs of Halving Lines Tanya Khovanova MIT November 5, 2018 Abstract Dai Yang MIT In this paper we discuss the connected components

More information

Trees. 3. (Minimally Connected) G is connected and deleting any of its edges gives rise to a disconnected graph.

Trees. 3. (Minimally Connected) G is connected and deleting any of its edges gives rise to a disconnected graph. Trees 1 Introduction Trees are very special kind of (undirected) graphs. Formally speaking, a tree is a connected graph that is acyclic. 1 This definition has some drawbacks: given a graph it is not trivial

More information

A 2-Approximation Algorithm for the Soft-Capacitated Facility Location Problem

A 2-Approximation Algorithm for the Soft-Capacitated Facility Location Problem A 2-Approximation Algorithm for the Soft-Capacitated Facility Location Problem Mohammad Mahdian Yinyu Ye Ý Jiawei Zhang Þ Abstract This paper is divided into two parts. In the first part of this paper,

More information

Superconcentrators of depth 2 and 3; odd levels help (rarely)

Superconcentrators of depth 2 and 3; odd levels help (rarely) Superconcentrators of depth 2 and 3; odd levels help (rarely) Noga Alon Bellcore, Morristown, NJ, 07960, USA and Department of Mathematics Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv

More information

Monotonicity testing over general poset domains

Monotonicity testing over general poset domains Monotonicity testing over general poset domains [Extended Abstract] Eldar Fischer Technion Haifa, Israel eldar@cs.technion.ac.il Sofya Raskhodnikova Ý LCS, MIT Cambridge, MA 02139 sofya@mit.edu Eric Lehman

More information

Monotone Paths in Geometric Triangulations

Monotone Paths in Geometric Triangulations Monotone Paths in Geometric Triangulations Adrian Dumitrescu Ritankar Mandal Csaba D. Tóth November 19, 2017 Abstract (I) We prove that the (maximum) number of monotone paths in a geometric triangulation

More information

Maximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube

Maximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube Maximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube Kavish Gandhi April 4, 2015 Abstract A geodesic in the hypercube is the shortest possible path between two vertices. Leader and Long

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings On the Relationships between Zero Forcing Numbers and Certain Graph Coverings Fatemeh Alinaghipour Taklimi, Shaun Fallat 1,, Karen Meagher 2 Department of Mathematics and Statistics, University of Regina,

More information

The Structure of Bull-Free Perfect Graphs

The Structure of Bull-Free Perfect Graphs The Structure of Bull-Free Perfect Graphs Maria Chudnovsky and Irena Penev Columbia University, New York, NY 10027 USA May 18, 2012 Abstract The bull is a graph consisting of a triangle and two vertex-disjoint

More information

Planar graphs, negative weight edges, shortest paths, and near linear time

Planar graphs, negative weight edges, shortest paths, and near linear time Planar graphs, negative weight edges, shortest paths, and near linear time Jittat Fakcharoenphol Satish Rao Ý Abstract In this paper, we present an Ç Ò ÐÓ Òµ time algorithm for finding shortest paths in

More information

FOUR EDGE-INDEPENDENT SPANNING TREES 1

FOUR EDGE-INDEPENDENT SPANNING TREES 1 FOUR EDGE-INDEPENDENT SPANNING TREES 1 Alexander Hoyer and Robin Thomas School of Mathematics Georgia Institute of Technology Atlanta, Georgia 30332-0160, USA ABSTRACT We prove an ear-decomposition theorem

More information

A New Algorithm for the Reconstruction of Near-Perfect Binary Phylogenetic Trees

A New Algorithm for the Reconstruction of Near-Perfect Binary Phylogenetic Trees A New Algorithm for the Reconstruction of Near-Perfect Binary Phylogenetic Trees Kedar Dhamdhere ½ ¾, Srinath Sridhar ½ ¾, Guy E. Blelloch ¾, Eran Halperin R. Ravi and Russell Schwartz March 17, 2005 CMU-CS-05-119

More information

On Covering a Graph Optimally with Induced Subgraphs

On Covering a Graph Optimally with Induced Subgraphs On Covering a Graph Optimally with Induced Subgraphs Shripad Thite April 1, 006 Abstract We consider the problem of covering a graph with a given number of induced subgraphs so that the maximum number

More information

Winning Positions in Simplicial Nim

Winning Positions in Simplicial Nim Winning Positions in Simplicial Nim David Horrocks Department of Mathematics and Statistics University of Prince Edward Island Charlottetown, Prince Edward Island, Canada, C1A 4P3 dhorrocks@upei.ca Submitted:

More information

On the Number of Tilings of a Square by Rectangles

On the Number of Tilings of a Square by Rectangles University of Tennessee, Knoxville Trace: Tennessee Research and Creative Exchange University of Tennessee Honors Thesis Projects University of Tennessee Honors Program 5-2012 On the Number of Tilings

More information

An Optimal Algorithm for the Euclidean Bottleneck Full Steiner Tree Problem

An Optimal Algorithm for the Euclidean Bottleneck Full Steiner Tree Problem An Optimal Algorithm for the Euclidean Bottleneck Full Steiner Tree Problem Ahmad Biniaz Anil Maheshwari Michiel Smid September 30, 2013 Abstract Let P and S be two disjoint sets of n and m points in the

More information

Lower Bounds for Cutting Planes (Extended Abstract)

Lower Bounds for Cutting Planes (Extended Abstract) Lower Bounds for Cutting Planes (Extended Abstract) Yuval Filmus, Toniann Pitassi University of Toronto May 15, 2011 Abstract Cutting Planes (CP) is a refutation propositional proof system whose lines

More information

Lecture Notes on GRAPH THEORY Tero Harju

Lecture Notes on GRAPH THEORY Tero Harju Lecture Notes on GRAPH THEORY Tero Harju Department of Mathematics University of Turku FIN-20014 Turku, Finland e-mail: harju@utu.fi 2007 Contents 1 Introduction 2 1.1 Graphs and their plane figures.........................................

More information

Response Time Analysis of Asynchronous Real-Time Systems

Response Time Analysis of Asynchronous Real-Time Systems Response Time Analysis of Asynchronous Real-Time Systems Guillem Bernat Real-Time Systems Research Group Department of Computer Science University of York York, YO10 5DD, UK Technical Report: YCS-2002-340

More information

A Note on Karr s Algorithm

A Note on Karr s Algorithm A Note on Karr s Algorithm Markus Müller-Olm ½ and Helmut Seidl ¾ ½ FernUniversität Hagen, FB Informatik, LG PI 5, Universitätsstr. 1, 58097 Hagen, Germany mmo@ls5.informatik.uni-dortmund.de ¾ TU München,

More information

Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games

Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games Tolls for heterogeneous selfish users in multicommodity networks and generalized congestion games Lisa Fleischer Kamal Jain Mohammad Mahdian Abstract We prove the existence of tolls to induce multicommodity,

More information

On the Complexity of the Policy Improvement Algorithm. for Markov Decision Processes

On the Complexity of the Policy Improvement Algorithm. for Markov Decision Processes On the Complexity of the Policy Improvement Algorithm for Markov Decision Processes Mary Melekopoglou Anne Condon Computer Sciences Department University of Wisconsin - Madison 0 West Dayton Street Madison,

More information

Matching Theory. Figure 1: Is this graph bipartite?

Matching Theory. Figure 1: Is this graph bipartite? Matching Theory 1 Introduction A matching M of a graph is a subset of E such that no two edges in M share a vertex; edges which have this property are called independent edges. A matching M is said to

More information

An Abstraction Algorithm for the Verification of Level-Sensitive Latch-Based Netlists

An Abstraction Algorithm for the Verification of Level-Sensitive Latch-Based Netlists An Abstraction Algorithm for the Verification of Level-Sensitive Latch-Based Netlists Jason Baumgartner ½, Tamir Heyman ¾, Vigyan Singhal, and Adnan Aziz ½ IBM Enterprise Systems Group, Austin, Texas 78758,

More information

A Computational Analysis of the Needham-Schröeder-(Lowe) Protocol

A Computational Analysis of the Needham-Schröeder-(Lowe) Protocol A Computational Analysis of the Needham-Schröeder-(Lowe) Protocol BOGDAN WARINSCHI Department of Computer Science and Engineering, University of California, San Diego 9500 Gilman Drive, CA 92093 bogdan@cs.ucsd.edu

More information

From Static to Dynamic Routing: Efficient Transformations of Store-and-Forward Protocols

From Static to Dynamic Routing: Efficient Transformations of Store-and-Forward Protocols SIAM Journal on Computing to appear From Static to Dynamic Routing: Efficient Transformations of StoreandForward Protocols Christian Scheideler Berthold Vöcking Abstract We investigate how static storeandforward

More information

Progress Towards the Total Domination Game 3 4 -Conjecture

Progress Towards the Total Domination Game 3 4 -Conjecture Progress Towards the Total Domination Game 3 4 -Conjecture 1 Michael A. Henning and 2 Douglas F. Rall 1 Department of Pure and Applied Mathematics University of Johannesburg Auckland Park, 2006 South Africa

More information

Satisfiability Coding Lemma

Satisfiability Coding Lemma ! Satisfiability Coding Lemma Ramamohan Paturi, Pavel Pudlák, and Francis Zane Abstract We present and analyze two simple algorithms for finding satisfying assignments of -CNFs (Boolean formulae in conjunctive

More information

Greedy Algorithms 1 {K(S) K(S) C} For large values of d, brute force search is not feasible because there are 2 d {1,..., d}.

Greedy Algorithms 1 {K(S) K(S) C} For large values of d, brute force search is not feasible because there are 2 d {1,..., d}. Greedy Algorithms 1 Simple Knapsack Problem Greedy Algorithms form an important class of algorithmic techniques. We illustrate the idea by applying it to a simplified version of the Knapsack Problem. Informally,

More information

On Universal Cycles of Labeled Graphs

On Universal Cycles of Labeled Graphs On Universal Cycles of Labeled Graphs Greg Brockman Harvard University Cambridge, MA 02138 United States brockman@hcs.harvard.edu Bill Kay University of South Carolina Columbia, SC 29208 United States

More information

Optimal Parallel Randomized Renaming

Optimal Parallel Randomized Renaming Optimal Parallel Randomized Renaming Martin Farach S. Muthukrishnan September 11, 1995 Abstract We consider the Renaming Problem, a basic processing step in string algorithms, for which we give a simultaneously

More information

Directed Single Source Shortest Paths in Linear Average Case Time

Directed Single Source Shortest Paths in Linear Average Case Time Directed Single Source Shortest Paths in inear Average Case Time Ulrich Meyer MPI I 2001 1-002 May 2001 Author s Address ÍÐÖ ÅÝÖ ÅܹÈÐÒ¹ÁÒ ØØÙØ ĐÙÖ ÁÒÓÖÑØ ËØÙÐ ØÞÒÙ Û ½¾ ËÖÖĐÙÒ umeyer@mpi-sb.mpg.de www.uli-meyer.de

More information

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006

2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 2386 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 6, JUNE 2006 The Encoding Complexity of Network Coding Michael Langberg, Member, IEEE, Alexander Sprintson, Member, IEEE, and Jehoshua Bruck,

More information

Indexing Mobile Objects Using Dual Transformations

Indexing Mobile Objects Using Dual Transformations Indexing Mobile Objects Using Dual Transformations George Kollios Boston University gkollios@cs.bu.edu Dimitris Papadopoulos UC Riverside tsotras@cs.ucr.edu Dimitrios Gunopulos Ý UC Riverside dg@cs.ucr.edu

More information

Approximation Algorithms for Item Pricing

Approximation Algorithms for Item Pricing Approximation Algorithms for Item Pricing Maria-Florina Balcan July 2005 CMU-CS-05-176 Avrim Blum School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 School of Computer Science,

More information

Acyclic Edge Colorings of Graphs

Acyclic Edge Colorings of Graphs Acyclic Edge Colorings of Graphs Noga Alon Ayal Zaks Abstract A proper coloring of the edges of a graph G is called acyclic if there is no 2-colored cycle in G. The acyclic edge chromatic number of G,

More information

Disjoint directed cycles

Disjoint directed cycles Disjoint directed cycles Noga Alon Abstract It is shown that there exists a positive ɛ so that for any integer k, every directed graph with minimum outdegree at least k contains at least ɛk vertex disjoint

More information

On Evasiveness, Kneser Graphs, and Restricted Intersections: Lecture Notes

On Evasiveness, Kneser Graphs, and Restricted Intersections: Lecture Notes Guest Lecturer on Evasiveness topics: Sasha Razborov (U. Chicago) Instructor for the other material: Andrew Drucker Scribe notes: Daniel Freed May 2018 [note: this document aims to be a helpful resource,

More information

External-Memory Breadth-First Search with Sublinear I/O

External-Memory Breadth-First Search with Sublinear I/O External-Memory Breadth-First Search with Sublinear I/O Kurt Mehlhorn and Ulrich Meyer Max-Planck-Institut für Informatik Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany. Abstract. Breadth-first search

More information

Byzantine Consensus in Directed Graphs

Byzantine Consensus in Directed Graphs Byzantine Consensus in Directed Graphs Lewis Tseng 1,3, and Nitin Vaidya 2,3 1 Department of Computer Science, 2 Department of Electrical and Computer Engineering, and 3 Coordinated Science Laboratory

More information

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007

Topic: Local Search: Max-Cut, Facility Location Date: 2/13/2007 CS880: Approximations Algorithms Scribe: Chi Man Liu Lecturer: Shuchi Chawla Topic: Local Search: Max-Cut, Facility Location Date: 2/3/2007 In previous lectures we saw how dynamic programming could be

More information

On-line multiplication in real and complex base

On-line multiplication in real and complex base On-line multiplication in real complex base Christiane Frougny LIAFA, CNRS UMR 7089 2 place Jussieu, 75251 Paris Cedex 05, France Université Paris 8 Christiane.Frougny@liafa.jussieu.fr Athasit Surarerks

More information

Algorithmic Aspects of Acyclic Edge Colorings

Algorithmic Aspects of Acyclic Edge Colorings Algorithmic Aspects of Acyclic Edge Colorings Noga Alon Ayal Zaks Abstract A proper coloring of the edges of a graph G is called acyclic if there is no -colored cycle in G. The acyclic edge chromatic number

More information

Online Scheduling for Sorting Buffers

Online Scheduling for Sorting Buffers Online Scheduling for Sorting Buffers Harald Räcke ½, Christian Sohler ½, and Matthias Westermann ¾ ½ Heinz Nixdorf Institute and Department of Mathematics and Computer Science Paderborn University, D-33102

More information

Abstract. A graph G is perfect if for every induced subgraph H of G, the chromatic number of H is equal to the size of the largest clique of H.

Abstract. A graph G is perfect if for every induced subgraph H of G, the chromatic number of H is equal to the size of the largest clique of H. Abstract We discuss a class of graphs called perfect graphs. After defining them and getting intuition with a few simple examples (and one less simple example), we present a proof of the Weak Perfect Graph

More information

Induced-universal graphs for graphs with bounded maximum degree

Induced-universal graphs for graphs with bounded maximum degree Induced-universal graphs for graphs with bounded maximum degree Steve Butler UCLA, Department of Mathematics, University of California Los Angeles, Los Angeles, CA 90095, USA. email: butler@math.ucla.edu.

More information

Graphs (MTAT , 4 AP / 6 ECTS) Lectures: Fri 12-14, hall 405 Exercises: Mon 14-16, hall 315 või N 12-14, aud. 405

Graphs (MTAT , 4 AP / 6 ECTS) Lectures: Fri 12-14, hall 405 Exercises: Mon 14-16, hall 315 või N 12-14, aud. 405 Graphs (MTAT.05.080, 4 AP / 6 ECTS) Lectures: Fri 12-14, hall 405 Exercises: Mon 14-16, hall 315 või N 12-14, aud. 405 homepage: http://www.ut.ee/~peeter_l/teaching/graafid08s (contains slides) For grade:

More information

QED Q: Why is it called the triangle inequality? A: Analogue with euclidean distance in the plane: picture Defn: Minimum Distance of a code C:

QED Q: Why is it called the triangle inequality? A: Analogue with euclidean distance in the plane: picture Defn: Minimum Distance of a code C: Lecture 3: Lecture notes posted online each week. Recall Defn Hamming distance: for words x = x 1... x n, y = y 1... y n of the same length over the same alphabet, d(x, y) = {1 i n : x i y i } i.e., d(x,

More information

A Bijection from Shi Arrangement Regions to Parking Functions via Mixed Graphs

A Bijection from Shi Arrangement Regions to Parking Functions via Mixed Graphs A Bijection from Shi Arrangement Regions to Parking Functions via Mixed Graphs Michael Dairyko Pomona College Schuyler Veeneman San Francisco State University July 28, 2012 Claudia Rodriguez Arizona State

More information

Interleaving Schemes on Circulant Graphs with Two Offsets

Interleaving Schemes on Circulant Graphs with Two Offsets Interleaving Schemes on Circulant raphs with Two Offsets Aleksandrs Slivkins Department of Computer Science Cornell University Ithaca, NY 14853 slivkins@cs.cornell.edu Jehoshua Bruck Department of Electrical

More information

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1

Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 CME 305: Discrete Mathematics and Algorithms Instructor: Professor Aaron Sidford (sidford@stanford.edu) January 11, 2018 Lecture 2 - Graph Theory Fundamentals - Reachability and Exploration 1 In this lecture

More information

A Fast Algorithm for Optimal Alignment between Similar Ordered Trees

A Fast Algorithm for Optimal Alignment between Similar Ordered Trees Fundamenta Informaticae 56 (2003) 105 120 105 IOS Press A Fast Algorithm for Optimal Alignment between Similar Ordered Trees Jesper Jansson Department of Computer Science Lund University, Box 118 SE-221

More information

SAT-CNF Is N P-complete

SAT-CNF Is N P-complete SAT-CNF Is N P-complete Rod Howell Kansas State University November 9, 2000 The purpose of this paper is to give a detailed presentation of an N P- completeness proof using the definition of N P given

More information

HW Graph Theory SOLUTIONS (hbovik) - Q

HW Graph Theory SOLUTIONS (hbovik) - Q 1, Diestel 9.3: An arithmetic progression is an increasing sequence of numbers of the form a, a+d, a+ d, a + 3d.... Van der Waerden s theorem says that no matter how we partition the natural numbers into

More information

Distributed minimum spanning tree problem

Distributed minimum spanning tree problem Distributed minimum spanning tree problem Juho-Kustaa Kangas 24th November 2012 Abstract Given a connected weighted undirected graph, the minimum spanning tree problem asks for a spanning subtree with

More information

Lecture 6: Faces, Facets

Lecture 6: Faces, Facets IE 511: Integer Programming, Spring 2019 31 Jan, 2019 Lecturer: Karthik Chandrasekaran Lecture 6: Faces, Facets Scribe: Setareh Taki Disclaimer: These notes have not been subjected to the usual scrutiny

More information

6 Randomized rounding of semidefinite programs

6 Randomized rounding of semidefinite programs 6 Randomized rounding of semidefinite programs We now turn to a new tool which gives substantially improved performance guarantees for some problems We now show how nonlinear programming relaxations can

More information

Chapter 4: Non-Parametric Techniques

Chapter 4: Non-Parametric Techniques Chapter 4: Non-Parametric Techniques Introduction Density Estimation Parzen Windows Kn-Nearest Neighbor Density Estimation K-Nearest Neighbor (KNN) Decision Rule Supervised Learning How to fit a density

More information

arxiv: v1 [math.co] 7 Dec 2018

arxiv: v1 [math.co] 7 Dec 2018 SEQUENTIALLY EMBEDDABLE GRAPHS JACKSON AUTRY AND CHRISTOPHER O NEILL arxiv:1812.02904v1 [math.co] 7 Dec 2018 Abstract. We call a (not necessarily planar) embedding of a graph G in the plane sequential

More information

Matching Algorithms. Proof. If a bipartite graph has a perfect matching, then it is easy to see that the right hand side is a necessary condition.

Matching Algorithms. Proof. If a bipartite graph has a perfect matching, then it is easy to see that the right hand side is a necessary condition. 18.433 Combinatorial Optimization Matching Algorithms September 9,14,16 Lecturer: Santosh Vempala Given a graph G = (V, E), a matching M is a set of edges with the property that no two of the edges have

More information

Discrete mathematics , Fall Instructor: prof. János Pach

Discrete mathematics , Fall Instructor: prof. János Pach Discrete mathematics 2016-2017, Fall Instructor: prof. János Pach - covered material - Lecture 1. Counting problems To read: [Lov]: 1.2. Sets, 1.3. Number of subsets, 1.5. Sequences, 1.6. Permutations,

More information

6. Lecture notes on matroid intersection

6. Lecture notes on matroid intersection Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm

More information

MC 302 GRAPH THEORY 10/1/13 Solutions to HW #2 50 points + 6 XC points

MC 302 GRAPH THEORY 10/1/13 Solutions to HW #2 50 points + 6 XC points MC 0 GRAPH THEORY 0// Solutions to HW # 0 points + XC points ) [CH] p.,..7. This problem introduces an important class of graphs called the hypercubes or k-cubes, Q, Q, Q, etc. I suggest that before you

More information

Mathematical and Algorithmic Foundations Linear Programming and Matchings

Mathematical and Algorithmic Foundations Linear Programming and Matchings Adavnced Algorithms Lectures Mathematical and Algorithmic Foundations Linear Programming and Matchings Paul G. Spirakis Department of Computer Science University of Patras and Liverpool Paul G. Spirakis

More information

Discrete Mathematics Lecture 4. Harper Langston New York University

Discrete Mathematics Lecture 4. Harper Langston New York University Discrete Mathematics Lecture 4 Harper Langston New York University Sequences Sequence is a set of (usually infinite number of) ordered elements: a 1, a 2,, a n, Each individual element a k is called a

More information

Extremal Graph Theory: Turán s Theorem

Extremal Graph Theory: Turán s Theorem Bridgewater State University Virtual Commons - Bridgewater State University Honors Program Theses and Projects Undergraduate Honors Program 5-9-07 Extremal Graph Theory: Turán s Theorem Vincent Vascimini

More information