Planar graphs, negative weight edges, shortest paths, and near linear time


Jittat Fakcharoenphol*    Satish Rao†

Abstract

In this paper, we present an O(n log^3 n) time algorithm for finding shortest paths in a planar graph with real weights. This can be compared to the best previous strongly polynomial time algorithm, developed by Lipton, Rose, and Tarjan in 1978, which ran in O(n^{3/2}) time, and to the best polynomial-time algorithm, developed by Henzinger, Klein, Subramanian, and Rao in 1994, which ran in Õ(n^{4/3}) time. We also present significantly improved algorithms for query and dynamic versions of the shortest path problem.

1 Introduction

The shortest path problem with real (positive and negative) weights is the problem of finding the shortest distance from a specified source node to all the nodes in the graph. For this paper, we assume that the graph has no negative cycles, since the shortest path between two nodes is typically undefined in the presence of a negative cycle. In general, algorithms for the shortest path problem can easily be modified to output a negative cycle if one exists; this also holds for the algorithms in this paper.

The shortest path problem has long been studied and continues to find applications in diverse areas. The problem has wide application even when the underlying graph is a grid graph. For example, there are recent image segmentation approaches that use negative cycle detection [4, 5]. Other of our favorite applications for planar graphs include separator algorithms [17], multi-source multi-sink flow algorithms [15], and algorithms for finding minimum weighted cuts.

In 1958, Bellman and Ford [2, 7] gave an O(mn) algorithm for finding shortest paths on an m-edge, n-vertex graph with real edge weights. Gabow and Tarjan [10] showed that this problem could in fact be solved in Õ(m√n) time.¹ Their algorithm, however, depended on the

*Computer Science Division, University of California, Berkeley, CA. jittat@cs.berkeley.edu.
Supported by a Fulbright Scholarship and a scholarship from the Faculty of Engineering, Kasetsart University, Thailand.

†Computer Science Division, University of California, Berkeley, CA. satishr@cs.berkeley.edu.

¹The Õ(·) notation ignores logarithmic factors.

values of the edge weights. For strongly polynomial algorithms, Bellman-Ford remains the best known. For graphs with positive edge weights, the problem is much easier; for example, Dijkstra's shortest path algorithm can be implemented in O(m + n log n) time.

For planar graphs, upon the discovery of planar separator theorems [14], an O(n^{3/2}) algorithm was given by Lipton, Rose, and Tarjan [13]. Their algorithm is based on partitioning the graph into pieces, recursively computing distances on the borders of the pieces using numerous invocations of Dijkstra's algorithm to build a dense graph, and then running the Bellman-Ford algorithm on the resulting dense graph to construct a global solution. Their algorithm worked not only for planar graphs but for any √n-separable graph.² Combining a similar approach with a (non-strongly) polynomial algorithm of Goldberg [11] for general graphs, Henzinger et al. [12] gave an Õ(n^{4/3}) algorithm for the shortest path problem in planar graphs (or any class of graphs with O(√n)-sized separators).

In this paper, we present an O(n log^3 n) time algorithm for finding shortest paths in a planar graph with real weights. We also present algorithms for query and dynamic versions of the shortest path problem.

1.1 The idea

Our approach is similar to the approaches discussed above in that it constructs a rather dense non-planar graph on a subset of the nodes and then computes a shortest path tree in that graph. We observe that there exists a shortest path tree in this dense graph that obeys a non-crossing property in the geometric embedding of the graph inherited from the embedding of the original planar graph.
Using this non-crossing condition, we can compute a shortest path tree of the dense graph in time that is near linear in the number of nodes of the dense graph, and significantly less than linear in its number of edges. Specifically, we decompose our dense graph into a set of bipartite graphs whose distance matrices obey a non-crossing condition (called the Monge condition). Efficient algorithms for searching for minima in Monge arrays have been developed previously; see, for example, [1, 3].

²Assuming that they were given a recursive decomposition of the graph.
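As a small self-contained illustration of this non-crossing (Monge) condition — a toy grid-graph example of ours, not the paper's construction — the following sketch computes the matrix of shortest-path distances from left-boundary to right-boundary nodes of a random grid and checks the quadrangle inequality D[u][w] + D[v][x] <= D[u][x] + D[v][w] for u above v and w above x, which follows because two crossing shortest paths must share a vertex:

```python
import heapq, itertools, random

def dijkstra(adj, s):
    # adj: {node: [(neighbor, weight), ...]}; returns shortest distances from s.
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

# Build a k x k directed grid graph with random positive weights (fixed seed).
k = 6
random.seed(1)
adj = {(i, j): [] for i in range(k) for j in range(k)}
for i in range(k):
    for j in range(k):
        for di, dj in ((0, 1), (1, 0), (0, -1), (-1, 0)):
            ni, nj = i + di, j + dj
            if 0 <= ni < k and 0 <= nj < k:
                adj[(i, j)].append(((ni, nj), random.randint(1, 9)))

left = [(i, 0) for i in range(k)]        # left boundary, top to bottom
right = [(i, k - 1) for i in range(k)]   # right boundary, top to bottom
D = [[dijkstra(adj, u)[v] for v in right] for u in left]

# Monge (quadrangle) condition on the boundary distance matrix.
for u, v in itertools.combinations(range(k), 2):
    for w, x in itertools.combinations(range(k), 2):
        assert D[u][w] + D[v][x] <= D[u][x] + D[v][w]
print("Monge condition holds for all quadruples")
```

The same inequality is what later lets the algorithm search such matrices in sublinear time.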

Our algorithm proceeds by combining Dijkstra's and the Bellman-Ford algorithms with methods for searching Monge matrices in sublinear time. We use an on-line method for searching Monge arrays together with our version of Dijkstra's algorithm on the dense graph. We note that our algorithms rely heavily on planarity, whereas some of the previous methods only require that the graphs be separable. Our methods do, however, tolerate a few violations of planarity: all of our results continue to hold when the graph can be embedded in the plane such that only O(√n) edges cross. For example, our algorithms apply to a road map with a few crossing and non-intersecting highways.

1.2 Our results

We give the following results.

- An O(n log^3 n) algorithm for finding shortest paths in planar graphs with real weights.

- An algorithm that requires O(n log^3 n) preprocessing time and answers distance queries between pairs of nodes in time O(√n log^2 n). The best previous algorithms had a query-time/preprocessing-time product of at least Ω(n^2), whereas ours is Õ(n^{3/2}).

- An algorithm that supports distance queries and update operations that change edge weights, in amortized time O(n^{2/3} log^{7/3} n) per operation. This algorithm works for positive edge weights.

- An algorithm that supports distance queries and update operations that change edge weights, in amortized time O(n^{4/5} log^{13/5} n) per operation. This algorithm works for negative edge weights as well.

We also present an on-line Monge searching problem, and methods to solve it that may be novel and of independent interest.

1.3 More related work

For planar graphs with positive edge weights, Henzinger et al. [12] gave an O(n) time algorithm. Their work improves on that of Frederickson [9], who had previously given an O(n√(log n)) algorithm for this problem. Frederickson [8] gave an improved all-pairs shortest path algorithm for planar graphs with small hammock decompositions. Djidjev et al.
[6] gave dynamic algorithms whose complexity is linear in the size of the hammock decomposition. This can be quite efficient in certain cases, e.g., when the graph is outerplanar, but for general planar graphs, even grid graphs, their algorithms are no better than those in [13]. A binary searching technique similar to the one we use in the Monge searching problem also appeared in the algorithm of Mitchell et al. [16] for finding shortest paths on the surface of a 3-dimensional polyhedron.

2 Preliminaries

Given a directed graph G = (V, E) and a weight function w : E -> R on the directed edges, a distance labelling for a source node s is a function d : V -> R such that d(v) is the minimum, over all s-to-v paths P, of the sum of w(e) over e in P.

2.1 Algorithms

The algorithms we use work through a sequence of edge relaxations. The algorithms start with a labelling d(·) and choose an edge to relax. The relax operation for an edge (u, v) proceeds by setting the distance label d(v) to the minimum of d(v) and d(u) + w(u, v).

Dijkstra's algorithm, described below, correctly computes a distance labelling when the weights on the edges are nonnegative, i.e., w(e) >= 0 for all e in E.

    d(s) = 0; d(v) = infinity for all v != s
    S = {}
    while S != V do
        u = findmin(V \ S)
        foreach edge (u, v) do
            d(v) = min(d(v), d(u) + w(u, v))   /* This is an edge relaxation */
        S = S + {u}

Any implementation of the algorithm above correctly computes the shortest path labelling of the nodes in such a graph. For weight functions where w(e) can be less than 0, Bellman and Ford suggested the following algorithm, which is guaranteed to compute a distance labelling if there is no cycle in the graph whose total weight under w(·) is negative.

    d(s) = 0; d(v) = infinity for all v != s
    p = 0
    while p < n do
        relax all edges
        p = p + 1

2.2 Feasible price functions and relabellings

Let p : V -> R be a price function over the node set. The reduced cost function w_p over the edge set induced by the price function p is defined as w_p((u, v)) = p(u) + w((u, v)) - p(v). It is well known that the reduced cost function preserves the presence of negative cycles and also the shortest paths.
We say that the price function p is feasible if and only if, for all edges (u, v), w_p((u, v)) >= 0. Hence, for any feasible price function p, we can find a distance labelling from any source node by running Dijkstra's algorithm on the modified graph with w_p as the weights, called the relabelled graph; the distance labelling for the original graph can then be easily recovered. We note that a valid set of distance labels from any source node is a feasible price function. Thus, computing shortest paths from k sources in a graph with negative weight edges can be accomplished with only one application of the Bellman-Ford algorithm and k - 1 applications of Dijkstra's algorithm.
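This relabelling technique (one Bellman-Ford to obtain feasible prices, then one Dijkstra per source on the relabelled graph) can be sketched as follows. The function names are ours, and a virtual zero-weight source stands in for "any source node" so that reachability is not an issue; the sketch assumes no negative cycles.

```python
import heapq

def feasible_prices(n, edges):
    # Bellman-Ford from a virtual source joined to every node by a zero-weight
    # edge; the result p satisfies p[u] + w - p[v] >= 0 for every edge (u, v, w)
    # when the graph has no negative cycle.
    p = [0] * n
    for _ in range(n - 1):
        for u, v, w in edges:
            if p[u] + w < p[v]:
                p[v] = p[u] + w
    return p

def dijkstra(n, adj, s):
    dist = [float('inf')] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

def all_pairs(n, edges):
    # One Bellman-Ford, then one Dijkstra per source on the relabelled graph.
    p = feasible_prices(n, edges)
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, p[u] + w - p[v]))   # reduced cost, nonnegative
    # Undo the relabelling: d(s, v) = d_p(s, v) - p[s] + p[v].
    return [[dv - p[s] + p[v] for v, dv in enumerate(dijkstra(n, adj, s))]
            for s in range(n)]

edges = [(0, 1, 4), (0, 2, 2), (1, 2, -3), (2, 3, 1), (1, 3, 5)]
print(all_pairs(4, edges)[0])   # distances from node 0: [0, 4, 1, 2]
```

Note how the recovered distance from 0 to 2 is 1 (via the negative edge), even though every Dijkstra runs on nonnegative reduced costs.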

3 The algorithm

We proceed in this section with a description of our algorithm. In Section 3.1, we define our main tool, the dense distance graph, which is an efficiently searchable representation of distances in the planar graph. In Section 3.2, we show how to compute this graph inductively, relying on Monge data structures and on efficient implementations of Dijkstra's algorithm and the Bellman-Ford algorithm. In Section 3.3, we show how to use it to compute a shortest path labelling of the graph. In Sections 3.4 to 3.6, we use the dense distance graph as the basis for query and dynamic shortest path algorithms.

3.1 The dense distance graph

A decomposition of a graph is a set of subsets S_1, S_2, ..., S_k (not necessarily disjoint) such that the union of all the sets is V and, for each edge (u, v) in E, we have u, v in S_i for some i. A node v is a border node of a set S_i if v is in S_i and there exists an edge (v, x) where x is not in S_i. We refer to the subgraph induced on a subset S_i as a piece of the decomposition.

We assume that we are given a recursive decomposition where, at each level, a piece with n nodes and r border nodes is divided into 2 subpieces such that each subpiece has no more than 2n/3 nodes and at most O(√n) border nodes. (The recursion stops when a piece contains a single edge.) In this recursive context, we define a border node of a subpiece to be any border node of the original piece or any new border node introduced by the decomposition of the current piece. It is convenient to define the level of a decomposition in the natural way: the entire graph is the only piece in the level-0 decomposition, the pieces of the decomposition of the entire graph are the level-1 pieces, and so on. A node is a level-i border node if it is a border node of a level-i piece. Note that a node may be a border node at many levels; indeed, any level-i border node is also a level-j border node for all j >= i. We assume, without loss of generality, that the graph is a bounded-degree graph.
Moreover, we assume inductively that there is a planar embedding of every piece in the recursive decomposition where all the border nodes lie on a single face and are circularly ordered. (We assume, for simplicity and without loss of generality, that each piece is connected.) One can find a recursive decomposition of the above form in O(n log n) time; see [12].

For each piece of the decomposition, we recursively compute the all-pairs shortest path distances between all its border nodes along paths that lie entirely inside the piece. We call this the dense distance graph of the planar graph. The level-i dense distance graph is the subgraph of the dense distance graph on the level-i border nodes. We refer to the level-i dense distance graph of a piece as the subgraph of the level-i dense distance graph whose edges correspond to paths that lie in the piece. This graph underlies previous algorithms for shortest paths in planar graphs; we give a better algorithm to construct and use it.

3.2 Computing the dense distance graph

We assume, recursively, that we have the level-(i+1) dense distance graph and the distances between all the border nodes of each piece. We will show how to find the edges of the level-i dense distance graph that correspond to a particular piece P. Recall that the level-i dense distance graph for P consists of the all-pairs shortest path distances between border nodes of its subpieces in the level-(i+1) dense distance graph. Also note that the level-(i+1) dense distance graph may contain negative edges. By finding a feasible price function using a single Bellman-Ford computation from any source, however, we can find the shortest path distances from every other source using only Dijkstra computations, as stated in Section 2.2. We proceed by doing a single Bellman-Ford computation in the level-(i+1) dense distance graph of P from one border node, and then doing r - 1 Dijkstra computations on the relabelled graph to compute the shortest path distances from the remaining border nodes.
This, again, is exactly what previous researchers did. Their algorithms, however, used implementations of Bellman-Ford and Dijkstra whose running times depended linearly on the number of edges present in the level-(i+1) dense distance graph. Our methods depend near linearly on the number of nodes in the dense distance graph, which is the square root of the number of edges.

We assume that the piece contains n nodes and O(√n) border nodes. By a property of the decomposition, each of the 2 subpieces of P contains at most O(√n) border nodes. Thus, the level-(i+1) dense distance graph of P contains at most O(√n) nodes.

The Bellman-Ford step

The Bellman-Ford algorithm that we run proceeds as follows.

    d(s) = 0; d(v) = infinity for all v != s
    p = 0
    while p < O(√n) do
        relax all edges
        p = p + 1

The total number of boundary nodes in the subpieces of P is O(√n), so the number of edges is O(n). Therefore, if we relaxed every edge directly as in [13], the running time for each phase of edge relaxation would be O(n) over all of P, and the total running time for the Bellman-Ford step would then be O(n^{3/2}). However, we will relax the edges in time that is near linear in the number of nodes, in particular O(√n log^2 n)

Figure 1: Partitions of nodes to form O(log n) Monge arrays: (a) the 1st and the 2nd arrays, (b) the 3rd and the 4th arrays, and (c) the 5th and the 6th arrays.

time. This gives a running time of O(n log^2 n) for the Bellman-Ford step.

We accomplish this by maintaining the edges of each subpiece of P in O(log n) levels of Monge arrays. The edges in each Monge array can be relaxed in O(k log k) time, where k is the number of nodes in the data structure. The first Monge array that we define is formed as follows. Divide the border nodes of some subpiece into 2 halves: the first (or left) half in the circular order (with an arbitrary starting point) and the second (or right) half. Consider the set of edges in the dense distance graph that go from the left border nodes to the right ones. These edges obey the Monge property, since the underlying shortest paths need not cross. Using the same left-right partitioning, we can define another Monge array with the direction of the edges reversed, i.e., the edges in the array go from the right border nodes to the left border nodes. Successive Monge arrays are constructed by recursively dividing the left and right halves further. Each node occurs in at most O(log n) data structures, and each edge occurs in exactly one data structure. Figure 1 shows how we partition the nodes and edges between them.

We can relax all the edges in a Monge array as follows. The nodes on the left have a label associated with them, and a node v on the right must choose a left node u which minimizes d(u) + w(u, v). Because of the planarity of the piece, the parent edges of two right nodes need not cross, and this gives us the Monge property. For this special case, we can use a standard divide-and-conquer technique to find all the parents in time O(r log r), where r is the number of nodes in the Monge array. The total number of nodes in all the data structures is O(√n log n) for each subpiece of P in the decomposition. In P we have 2 subpieces.
Therefore, the time for relaxing all of P's edges is O(√n log^2 n). The number of phases that Bellman-Ford runs is the number of nodes on the longest path, which is O(√n). Thus, the time for the Bellman-Ford step is O(n log^2 n) on an n-node piece.

    d(s) = 0; d(v) = infinity for all v != s; S = {}
    for all S_i do AddToHeap(H, S_i)
    while H is not empty do
        S_min = ExtractMin(H)
        v = ExtractMinInSubpiece(S_min)
        if v is not in S then
            for all S_i containing v do
                ScanInSubpiece(S_i, v, d(v))
                UpdateHeap(H, (FindMinInSubpiece(S_i), S_i))
            S = S + {v}
        else
            UpdateHeap(H, (FindMinInSubpiece(S_min), S_min))

Figure 2: Pseudocode for the Dijkstra implementation

The Dijkstra step

After one invocation of Bellman-Ford, we have a shortest path tree from some border node of P. Now, using the relabelling property, we can modify all the edge weights so that they are all nonnegative while the shortest paths remain unchanged. With these modified weights, we repeatedly apply Dijkstra's algorithm to compute the all-pairs shortest distances among the border nodes of P.

In order to compute the shortest path distances from each border node of P, we proceed as in Dijkstra's algorithm. While working at level i of the decomposition, we view the subpieces at level i+1 each separately. Each subpiece maintains a data structure that allows us to scan a node (relax all edges in the dense distance graph adjacent to that node in that subpiece) and to find the minimum labelled node in the subpiece efficiently. As in Dijkstra's algorithm, we maintain a set of scanned nodes S and a global heap H for keeping the minimum labelled nodes from all subpieces. Our implementation proceeds as follows. A node is extracted from the global heap. This node can belong to many subpieces, so we scan it in all the subpieces containing it. After we scan the node, the minimum labelled node in some subpiece might change, so we have to update the entry of that subpiece in the global heap H.
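The standard divide-and-conquer parent search used above to relax all edges of a Monge array can be sketched as follows. This is our own sketch, with a toy quadratic cost matrix standing in for the dense distance graph; it relies on the fact that under the Monge condition the minimizing left node is nondecreasing as we move along the right nodes.

```python
def monge_relax(d_left, w):
    # d_left[u]: current label of left node u; w[u][v]: Monge distance matrix.
    # Returns, for every right node v, min over u of d_left[u] + w[u][v].
    # The minimizing u is nondecreasing in v, so a divide-and-conquer search
    # touches each left node only once per recursion level: O((l + r) log r).
    nl, nr = len(d_left), len(w[0])
    best = [float('inf')] * nr

    def solve(vlo, vhi, ulo, uhi):
        if vlo > vhi:
            return
        vmid = (vlo + vhi) // 2
        umin, cmin = ulo, float('inf')
        for u in range(ulo, uhi + 1):          # scan candidates for the middle
            c = d_left[u] + w[u][vmid]
            if c < cmin:
                cmin, umin = c, u
        best[vmid] = cmin
        solve(vlo, vmid - 1, ulo, umin)        # parents to the left stay <= umin
        solve(vmid + 1, vhi, umin, uhi)        # parents to the right stay >= umin

    solve(0, nr - 1, 0, nl - 1)
    return best

d = [3, 0, 2, 5]                               # labels of the left nodes
w = [[(u - v) ** 2 for v in range(5)] for u in range(4)]   # a Monge matrix
print(monge_relax(d, w))                       # -> [1, 0, 1, 3, 6]
```

The matrix (u - v)^2 satisfies the quadrangle inequality, so the monotone-parent argument applies; the real algorithm uses the boundary-to-boundary distance matrices, which are Monge by planarity.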
The primary difference between our implementation of Dijkstra's algorithm and the usual one is that, in our implementation, a node that has already been scanned can appear again in the heap. This is because the data structure in each subpiece does not guarantee that a minimum border node, once scanned, will never reappear as a minimum node. The data structure does, however, guarantee that a node can reappear at most O(log r) times. Let S_1, ..., S_k denote the subpieces of P. The pseudocode in Figure 2 describes the algorithm for computing the shortest

path tree starting at a border node. It uses the following operations on the data structures that are maintained for each of the subpieces.

- ScanInSubpiece(S_i, v, d_v): Relax all edges of v in the level-(i+1) dense distance graph in piece S_i, given that d(v) = d_v. A sequence of l calls to ScanInSubpiece can be implemented in O(l log^2 r) time.

- FindMinInSubpiece(S_i): Return the border node (which might already be scanned) in piece S_i whose label is no greater than that of every unscanned node in the piece. This procedure can be implemented in O(1) time.

- ExtractMinInSubpiece(S_i): Return the border node (which might already be scanned) in piece S_i whose label is no greater than that of every unscanned node in the piece, and attempt to remove it from the heap in the piece. No node is returned by this procedure more than O(log r) times. A sequence of l calls to ExtractMinInSubpiece can be implemented in O(l log r) time.

We will show how to implement this data structure in Section 4.2. At this point, we assume the bounds stated above and use them to bound the running time of the Dijkstra step. We stress that an already scanned node might be returned by FindMinInSubpiece and ExtractMinInSubpiece. The data structure does, however, guarantee that no border node is returned by ExtractMinInSubpiece more than O(log r) times.

When the data structure for each subpiece returns a minimum labelled unscanned node, it is the minimum unscanned node in the subpiece. Also, the algorithm only scans a node which is the minimum over all nodes returned from all the subpieces. Therefore, every time the algorithm scans a node v, v has the minimum distance label over all unscanned nodes. Thus, our algorithm is a valid implementation of Dijkstra's algorithm in that it only scans minimum labelled nodes, and hence it correctly computes a shortest path labelling.

Analysis of the running time of the Dijkstra step

For this piece, there are O(r) = O(√n) border nodes in consideration.
The number of calls to ScanInSubpiece is bounded by the number of nodes in the level-(i+1) dense distance graph, because the degree of each node is bounded. Since each node is scanned once, as in Dijkstra's algorithm, there are O(√n) calls to ScanInSubpiece. ExtractMinInSubpiece is called at most O(log r) times for each node in the level-(i+1) dense distance graph, so the total number of such operations is O(r log r), for a total cost of O(r log^2 r). Finally, the number of calls to FindMinInSubpiece is bounded by the number of calls to ScanInSubpiece for a subpiece. Therefore, the running time for computing each shortest path tree is O(r log^2 r) = O(√n log^2 n), and the total running time for computing all the trees is O(√n · √n log^2 n) = O(n log^2 n).

The running time for constructing the dense distance graph

For each level of the decomposition, the time for doing the Bellman-Ford steps is O(n log^2 n), and the time for all the Dijkstra computations is O(n log^2 n). Since there are at most O(log n) levels in the decomposition, the time to construct the whole dense distance graph is O(n log^3 n).

3.3 Shortest path

To actually solve the shortest path problem for a source s, we use the dense distance graph as follows. We add s as a border node to all the pieces that contain it and compute the dense distance graph on the resulting decomposition. We compute a shortest path labelling from the source to the border nodes in the level-1 dense distance graph using the Bellman-Ford algorithm above. We then extend the distances to the internal nodes recursively, again using the Bellman-Ford algorithm. The Bellman-Ford computation costs O(n log^2 n) for each level. Therefore, the running time for computing all the distances is O(n log^3 n).

3.4 Supporting queries when the graph is static

The dense distance graph and the Dijkstra procedure above can be used to answer shortest path queries between a pair of nodes.
In this section, we show how to use the dense distance graph to find the shortest distance between any pair of nodes in O(√n log^2 n) time. The algorithm for this is very similar to the one for the Dijkstra step in the shortest path algorithm. Suppose the query is for the distance of a pair (u, v). The shortest u-to-v path can be viewed as a sequence of paths between border nodes of the nested pieces that contain u and v. The lengths of these paths are represented in the dense distance graph as edges between border nodes and enclosing border nodes, or as edges among border nodes of a piece. Thus, we can perform a Dijkstra computation on this subgraph of the dense distance graph to compute the shortest u-to-v path. We derive the bound on the number of border nodes in the pieces containing u as follows. Each piece, except the first one, which is the entire graph, is a piece in the decomposition of the previous one. Hence the number of nodes goes down geometrically. Also, the number of border nodes, which is bounded above by the square root of the number of nodes in the piece, goes

down geometrically. Therefore, the number of border nodes involved is O(√n). We use the same algorithm as in the Dijkstra step, but now we work with many pieces from many levels of the decomposition. The algorithm in the Dijkstra step continues to work in time O(m log^2 m), where m is the total number of nodes involved in the computation. Since the total number of nodes in the graph that we are searching is O(√n), the running time is bounded by O(√n log^2 n).

3.5 Dynamic algorithms for graphs with only positive edge weights

A dynamic data structure answers shortest path queries and allows edge cost updates, where the cost of an edge may be decreased or increased. (Edge additions and deletions are not addressed in this paper.) The query algorithm of the previous subsection explicitly works only with the pieces in the recursive decomposition that contain the query pair. It can avoid the other pieces because all the distances in those pieces are reflected in the distances among the border nodes of the pieces containing them. In the dynamic version, we will use the same algorithm as in the query-only case. Since our query step uses a Dijkstra computation, it is crucial that all weights be nonnegative. However, an update might introduce a negative edge into the relabelled graph. To simplify the presentation, we first discuss, in this section, the case where all edges have positive weights; in the following section, we extend the idea to the general case.

We do not know how to efficiently maintain an explicit representation of the dense distance graph when an update occurs. But only the pieces containing an updated edge will not have the correct distances among their border nodes.
That is, any edge of the dense distance graph between two border nodes of a piece containing an updated edge no longer has an accurate distance label; any other edge of the dense distance graph has the correct label. We call the pieces that contain updated edges activated pieces, and we call the border nodes of these pieces activated nodes. (See Figure 3(b) for an example.) To properly recompute the distances for a piece p that contains an updated edge, we need to consider the distances among all the border nodes of the pieces that are contained in p. Thus, we define the activated graph to consist of all the valid edges corresponding to border nodes of the pieces containing an updated edge and their sibling pieces. (See Figure 3(c).) We answer a query for a pair (u, v) by adding the valid edges of border nodes of the pieces containing u and v to the activated graph and running a Dijkstra computation on the resulting graph. We call this graph the extended activated graph for (u, v). (See Figure 3(d).) We proceed by deriving a bound on the number of nodes involved in the computation, assuming that we allow a maximum of k updates before rebuilding the entire data structure.

Figure 3: (a) A graph with a 3-level decomposition (1st-, 2nd-, and 3rd-level separators). (b) Activated pieces and activated nodes when an edge e is updated. (c) An activated graph when e is updated, including the sibling pieces of activated pieces. (d) An extended activated graph for (u, v).

For each update, the number of border nodes on the pieces that need to be in the Dijkstra computation is O(√n). Naively, one can bound the total number of activated nodes by O(k√n). In fact, if we consider a top-down process that divides any piece that contains an updated edge, we can show that the total number of activated nodes is O(√(nk)), as follows. Consider the decomposition tree. There are at most k leaves that are activated. Hence, at most k - 1 pieces have both of their children activated; call these pieces branching pieces.
Because the number of nodes goes down geometrically along the tree, we can bound the total number of activated border nodes by the number of border nodes of the branching pieces. The worst case is when all k - 1 branching pieces are at the highest levels of the decomposition tree, i.e., they form a balanced binary tree. We note that the pieces on the same level partition the graph; thus, the number of border nodes is maximized when they partition the graph evenly. Hence, on level l there are at most 2^l · √(n/2^l) = √(2^l n) border nodes. The sum at the last level dominates the total; therefore, the number of activated border nodes is O(√(nk)). Thus, the Dijkstra computation described above will run in time O(√(nk) log^2 n), and the total cost of a sequence of k updates and queries is O(k√(nk) log^2 n) for the queries plus O(n log^3 n)

for (re)building the dense distance graph. By choosing k to be n^{1/3} log^{2/3} n, we get an amortized complexity of O(n^{2/3} log^{7/3} n) per operation.

3.6 Dynamic algorithms for graphs with negative edge weights

We follow the same strategy in this case as well. That is, we simply maintain the notion of the activated graph during a sequence of updates. To answer a query for a pair (u, v), we compute a distance labelling in the extended activated graph for (u, v). Unfortunately, there may be negative edges in the extended activated graph, so we cannot just do a Dijkstra computation as above. We note that if we have a feasible price function over the node set of the extended activated graph, and only one edge e = (u, v) is updated with a negative weight w, we can use one computation of Dijkstra's algorithm to update the price function, as follows. We compute the shortest distance labels d(·) of all the nodes starting from v. If d(u) is at least -w, changing the weight of e does not introduce any edge with a negative reduced cost in the graph with d(·) as a price function; hence, we can apply the update and take d(·) as the new price function. Therefore, if we already have j updates, we can compute a feasible price function in the extended activated graph by performing j Dijkstra computations: we start with the original price function on the extended activated graph and, for each update, we revise the price function as described above. Once we have a feasible price function for the extended activated graph that includes all the updates, we can proceed as in the previous section. After k queries and updates, we rebuild the dense distance graph. Thus, the total time for a sequence of k queries and updates is O(k^2 √(nk) log^2 n) for recomputing the price functions plus O(n log^3 n) for (re)building the dense distance graph. By choosing k to be n^{1/5} log^{2/5} n, we get an amortized complexity of O(n^{4/5} log^{13/5} n) per operation.

4 Monge searching data structures

In this section, we describe the data structures that underlie the algorithms.
We describe the general setting of the bipartite Monge searching problem in Section 4.1. We develop an on-line version of the data structure in Section 4.2. The data structure is extended to handle the non-bipartite case in Section 4.3. We note that the interface of this data structure is rather involved. The data structure is used mainly in the Dijkstra step of the algorithms. Also, the technique for reducing the general case to the bipartite case is used in the edge relaxation step of our implementation of Bellman-Ford.

4.1 Bipartite Monge searching

For a bipartite graph G = (A + B, E), a right-matching is a set M, a subset of E, such that each v in B belongs to exactly one edge (u, v) in M. If A and B are ordered sets, we call a right-matching M a Monge right-matching iff, for every pair of matches (u, x), (v, y) in M, if u <= v in A, then x <= y in B. An example of a bipartite graph with a distance function that has the Monge property is the following. Consider a bipartite graph G = (A + B, E) with a distance function d : A x B -> R satisfying the condition

    d(u, w) + d(v, x) <= d(u, x) + d(v, w)                (1)

for all nodes u <= v in A and w <= x in B. We note that for any right-matching M, there exists a Monge right-matching M' having no greater cost. We can use standard divide-and-conquer techniques to derive an O(n log n) algorithm for finding the minimum Monge matching on this graph.³

4.2 On-line bipartite Monge searching

The condition on the distance function above still holds when an offset distance d(v) on each left node v is given, i.e., when the cost for an edge (u, v) is d(u) + d(u, v). We now consider an on-line version of this problem, in which the offset distances on the left-side nodes are specified on-line. We are given a bipartite graph G = (A + B, E) with a distance function d over the edges satisfying (1), and a not-fully-specified initial distance d(v) for every node v in A; the cost for an edge (u, v) to be in the matching is now d(u) + d(u, v). Initially, the distance d(v) = infinity for every v in A, and d(v) for each v will be specified at most once over the life of the data structure. We want the data structure to maintain the best Monge right-matching.
To make the interface suitable for the application of the data structure, we introduce a growing subset S of R. Initially, S is empty. We only allow the user to (1) query for the best matched node v in R \ S and (2) add the current best matched node v in R \ S to S. Basically, the set S denotes the set of nodes in R that have their correct matches. One interpretation of a node having the correct match in the on-line setting is the following: in the context of Dijkstra's algorithm, the minimum node v in R \ S certainly has the correct match (or correct label) when it is the minimum labelled node over all labelled nodes not in S in all the subpieces. One can see from this example why the data structure itself cannot decide whether the current minimum node has the correct match. When the current minimum node has the correct match, the user must add it to S in order to query for the next best matched nodes, since the data structure only allows queries for the current best matched node outside the correctly matched set.

(3) The idea is to find the correct parent of the middle right node first by checking all the left nodes, and then recurse on the top and bottom halves of the right nodes. Each left node is examined only once per level of the recursion.
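The interface just described, including the user-driven confirmation set S, can be pinned down with a deliberately naive reference implementation (ours, O(n) per operation, ignoring the Monge property); the operation names follow the data structure's, in snake_case.

```python
class NaiveMongeSearch:
    """Naive reference for the on-line bipartite Monge searching interface.
    It recomputes minima from scratch and only pins down the semantics."""

    def __init__(self, n_left, n_right, dist):
        self.dist = dist                          # dist(u, v), u a left node
        self.offset = [float('inf')] * n_left     # initial distances a(u)
        self.n_right = n_right
        self.S = set()             # right nodes whose matches are confirmed

    def label(self, v):
        return min(self.offset[u] + self.dist(u, v)
                   for u in range(len(self.offset)))

    def activate_left(self, u, d_u):
        self.offset[u] = d_u       # specified at most once per left node

    def find_next_min_node(self):
        return min((v for v in range(self.n_right) if v not in self.S),
                   key=self.label)

    def add_current_min_node_to_S(self):
        # only the *user* knows when the current minimum is correct
        self.S.add(self.find_next_min_node())

ds = NaiveMongeSearch(3, 3, lambda u, v: abs(u - v) + 1)
ds.activate_left(1, 0)
assert ds.find_next_min_node() == 1   # right node 1 is currently best
ds.add_current_min_node_to_S()        # the user confirms its match
```

Note that queries never report nodes already in S, which is exactly why the user must confirm the current minimum before the next one becomes visible.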

Without loss of generality, assume that |L| = |R| = n. Let L be the ordered set a_1, ..., a_n and let R be the ordered set b_1, ..., b_n. The data structure maintains the initial distance variables a(v) for all v in L and the subset S, and supports the following operations.

ActivateLeft(u, d_u): for u in L, set a(u) = d_u.

FindNextMinNode(): return a node v in R \ S such that v = argmin over v in R \ S of min over u in L of (a(u) + d(u, v)).

AddCurrentMinNodeToS(): set S = S together with FindNextMinNode().

To build this data structure, we use an interval tree, which, for an ordered set of values x_1, ..., x_l, supports a query of the form min over s <= i <= t of x_i, for any given 1 <= s <= t <= l. The interval tree can be implemented using a balanced binary tree; the time for each query is O(log l), where l is the size of the ordered set.

The data structure maintains:

- for each node in L, whether it is active;
- the left-neighbor tree N, a binary tree, ordered by index, of the nodes in L that are the best left neighbors of some right node;
- the heap H of the minimum edges (a, b) over the interval of every a in N;
- the best-left-node data structure M, which stores for each node v in R its best left node in L; we implement M with a binary tree storing the triples (a, s(a), t(a)).

Also, for each active node a, the data structure maintains the range [s(a), t(a)), and the data structure is given an interval tree T_a representing the values d(a, b_1), ..., d(a, b_n).(4)

The algorithm maintains the invariant that every a in L whose initial distance a(a) is finite is the best left node for the right nodes in its range [s(a), t(a)), i.e., a(a) + d(a, b_j) <= a(a') + d(a', b_j) for all s(a) <= j < t(a) and every activated a' in L. Note that this implies that the ranges [s(a), t(a)) are non-overlapping. (For an example, see Figure 4(a).)

We now describe how each operation is performed.

ActivateLeft(u, d_u): If u is the first node activated, let s(u) = 1 and t(u) = n + 1, and insert u into N. Otherwise, we find the set of right nodes b_p, ..., b_q for which the node u is now the minimum left node.

(4) These interval trees must be constructed a priori, e.g., when the distances d(.,.) are computed; the data structure is given them along with the representation of the distance function as input. This is easy to add to the representation of the dense distance graph.

Figure 4: (a) A bipartite graph with a set of activated left nodes and their intervals. (b) The changes after the node u is activated.

We describe how to find p, the top-most right match of u. Denote the nodes in N as n_1, ..., n_l, ordered as in L. We find the maximum index i such that n_i precedes u in L. If there is no such node, we let p = 1. Otherwise, we find the activated left node n_o whose interval contains p by sequentially comparing, for i, i-1, ..., the distance label of the top right match b_{s(n_i)} of n_i with the new distance label it would get from u; we continue comparing until the distance from u is no better than the old one. At that point we know that p lies in [s(n_o), t(n_o) - 1], and we can binary search for p. We use a similar method to find q; let n_r be the left-side node that has b_q in its interval.

We then modify the data structure to reflect these changes. The internal structures affected are the tree N, the heap H, and the best-left-node data structure M. The nodes n_o and n_r, which previously had b_p and b_q in their intervals, have to shrink their intervals. All the nodes n_{o+1}, ..., n_{r-1} have their intervals removed entirely, and they are removed from N. We set s(u) = p and t(u) = q + 1. Finally, every node whose interval is affected has to find its new best right neighbor and update the heap H accordingly: we delete the entries corresponding to n_{o+1}, ..., n_{r-1} and modify the entries corresponding to n_o and n_r. Figure 4(b) shows how the data structure is modified after u is activated.

FindNextMinNode(): We use H to find the minimum edge (a, b) and return b.

AddCurrentMinNodeToS(): Let b_j = FindNextMinNode(). Use M to find the best left matched node a of b_j. We create two new nodes a' and a'' and put them next to a in N so that a' < a < a''. We set s(a') = s(a), t(a') = j, s(a'') = j + 1, and t(a'') = t(a). We also set s(a) = j and t(a) = j + 1, and remove (a, b_j) from the heap H. Finally, for a' and a'', we use a's interval tree to find their best right neighbors and add them to H.

We note that we could eliminate the best-left-node data structure M by having FindNextMinNode return, together with the minimum node, its left neighbor, i.e., the edge (a, b). However, this would not improve the running time.

4.2.1 Analysis of the running time

We note that the size of N might exceed n during the execution of the algorithm, because we create new nodes every time AddCurrentMinNodeToS is called. However, it is called at most n times; thus, we create no more than 2n = O(n) nodes.

We now analyze the running time of each operation.

ActivateLeft(u, d_u): In the beginning, searching for the index i in N takes O(log n) time. To find p we do a sequential search and a binary search. Every node in N that we examine during the sequential search, except the last one, is removed; we charge the cost of the sequential search to the cost of removing and updating these nodes. The cost of the binary search is O(log n). The search for the lower end q costs the same. Then the node u has to pick its best right neighbor and add it to H, which can be done in O(log n) time. After the interval is found, some other nodes in N must update their data. At most two nodes have to shrink their intervals, re-pick their best right neighbors, and update their entries in H; this takes O(log n) time. All the other affected nodes are deleted and never reappear; each deletion takes O(log n) time, and we charge it to the time at which the node was inserted into the data structure. Therefore, the operation takes O(log n) amortized time.

FindNextMinNode(): We can read the top-most item of H in O(1) time.

AddCurrentMinNodeToS(): We can find the current minimum node b in O(1) time. It takes O(log n) time to find the left matched node a of b. We then do another O(1) operations on N and H, which take O(log n) time.
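(The interval tree assumed throughout this subsection can be any structure answering static range-minimum queries in O(log l). A compact generic stand-in, our own sketch rather than the paper's exact structure, is an iterative segment tree; the real data structure would also return the argmin, i.e., the best right node, not just the minimum value.)

```python
class IntervalTree:
    """Range-minimum over a fixed ordered set of values (0-indexed here):
    query(s, t) returns min(values[s..t]) in O(log l) time."""

    def __init__(self, values):
        self.n = len(values)
        self.seg = [float('inf')] * (2 * self.n)
        self.seg[self.n:] = list(values)
        for i in range(self.n - 1, 0, -1):     # build internal nodes bottom-up
            self.seg[i] = min(self.seg[2 * i], self.seg[2 * i + 1])

    def query(self, s, t):                     # inclusive, 0 <= s <= t < n
        lo, hi, best = s + self.n, t + self.n + 1, float('inf')
        while lo < hi:
            if lo & 1:                         # lo is a right child: take it
                best = min(best, self.seg[lo]); lo += 1
            if hi & 1:                         # hi is exclusive: step left
                hi -= 1; best = min(best, self.seg[hi])
            lo //= 2; hi //= 2
        return best

t = IntervalTree([5, 2, 7, 1, 9, 3])
assert (t.query(0, 2), t.query(2, 5), t.query(4, 4)) == (2, 1, 9)
```

Building takes O(l) time, matching the requirement that the trees be constructed a priori alongside the distances.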
Therefore, AddCurrentMinNodeToS runs in time O(log n).

4.3 Non-bipartite on-line Monge searching

In this section we generalize our data structure to support the case when the graph is not bipartite. We have a graph G = (V, E) with a distance function d : E -> R. The nodes in V are in a circular order, and the distance function satisfies the property that

    d(u, w) + d(v, x) >= d(u, x) + d(v, w)    (2)

for every u, v, w, x in V such that u < v < w < x in the circular order on V. Notice that the sign of the inequality is reversed from (1): here (u, w) crosses (v, x), contrary to the bipartite case, where (u, x) crosses (v, w).

This general case can be reduced to O(log n) bipartite cases, using the idea explained earlier. From the graph, we create 2 ceil(log n) bipartite graphs, because for each left-right partition the edges between the two sides can go in two directions. We denote these bipartite graphs by G_0, G_1, ..., G_{2 ceil(log n) - 1}. Under this reduction, each edge belongs to one and only one bipartite graph. We refer to each bipartite graph G_i as a level of G, and we let F denote the set of these O(log n) bipartite graphs.

The operations that we need from this non-bipartite data structure are the following. We want to be able to set the initial offset distances as in the bipartite case, and we want to be able to find the minimum labelled node. The minimum labelled node of the graph is the minimum one over all the levels. However, the notion of the set S is different now. Suppose that a node v is the current minimum labelled node, attaining its minimum label on the level-i bipartite graph. When v has the correct match, its label reaches this minimum only on level i. On the other levels, the labels of v do not necessarily reach their minima, i.e., they can still change. Therefore, we cannot put v into S on all the other levels, because that could affect how we search for the interval of some unactivated left node. Hence, we add v only to the set S of the level-i bipartite graph. This has a drawback: a call to FindNextMinNode can return v again.
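The level assignment behind this reduction can be sketched as follows (our illustrative reconstruction, not the paper's code; for simplicity we split a linear order at midpoints, while the paper works with a circular one): a directed pair (i, j) is assigned to the recursion depth at which i and j first fall on opposite sides of a split, with one bipartite graph per depth and direction, so every edge lands in exactly one of the 2 ceil(log2 n) levels.

```python
import math

def level_of(i, j, n):
    """Bipartite level of the directed pair (i, j), i != j, over nodes
    0..n-1.  The order is split recursively at the midpoint; the pair is
    assigned at the depth where i and j first separate, with one level per
    (depth, direction) -- 2*ceil(log2 n) levels in total."""
    lo, hi, depth = 0, n, 0
    while True:
        mid = (lo + hi) // 2
        i_left, j_left = i < mid, j < mid
        if i_left and not j_left:
            return 2 * depth          # edge goes left half -> right half
        if j_left and not i_left:
            return 2 * depth + 1      # edge goes right half -> left half
        lo, hi = (lo, mid) if i_left else (mid, hi)
        depth += 1

n = 16
levels = [level_of(i, j, n) for i in range(n) for j in range(n) if i != j]
# every directed pair lands in exactly one of the 2*log2(n) = 8 levels
assert max(levels) == 2 * math.ceil(math.log2(n)) - 1
```

Since level_of is a function of the pair, each directed edge belongs to one and only one level, as the text requires; within a level, the two sides of the split inherit the node order, which is what makes each level a bipartite Monge instance.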
However, because each node belongs to at most O(log n) levels, the node v can reappear at most O(log n) times.

The data structure for the non-bipartite case consists of the O(log n) bipartite data structures, one for each level G_i in F. It also maintains a heap H' of the minimum nodes over all levels; initially the distance offset a(v) is infinite for every v on every level. To make the names of the procedures consistent with the algorithm that constructs the dense distance graph, we call these procedures ScanInSubpiece, FindMinInSubpiece, and ExtractMinInSubpiece instead of ActivateNode, FindMin, and ExtractMin, respectively.

We now describe the operations that the data structure supports, together with their implementation and running times.

ScanInSubpiece(v, d_v): set a(v) = d_v. This operation is implemented by calling ActivateLeft(v, d_v) on every level G_i in F in which v is a left-side node; on the affected levels, we call FindMin and update their entries in the heap H'. This takes O(log^2 n) amortized time, because there are O(log n) levels and each call to ActivateLeft costs O(log n) amortized time. The time for finding the minimum nodes and updating the

heap is only O(log n log log n), because the heap H' is of size O(log n).

FindMinInSubpiece(): find the minimum distance node over all levels. This can be done in O(1) time by returning the minimum entry of the heap H'.

ExtractMinInSubpiece(): find the minimum distance node over all levels, remove that node from its level, and attempt to add it to the set S of that level's data structure. For this operation we proceed as in FindMinInSubpiece, but after the minimum node is found we call AddCurrentMinNodeToS once on the level to which the minimum node belongs, and update that level's entry in H'. The cost of AddCurrentMinNodeToS is O(log n) time, and the cost of updating H' is O(log log n); therefore, this operation can be done in O(log n) time. As noted in the discussion above, after O(log n) attempts to add a node to S, that node will never appear as a minimum node again.

Acknowledgments

We would like to thank Chris Harrelson for his careful reading of this paper.

References

[1] Alok Aggarwal, Amotz Bar-Noy, Samir Khuller, Dina Kravets, and Baruch Schieber. Efficient Minimum Cost Matching and Transportation Using the Quadrangle Inequality. Journal of Algorithms, 19(1).
[2] R. E. Bellman. On a Routing Problem. Quart. Appl. Math., 16:87-90.
[3] Samuel R. Buss and Peter N. Yianilos. Linear and O(n log n) Time Minimum-Cost Matching Algorithms for Quasi-Convex Tours. SIAM Journal on Computing, 27(1).
[4] I. J. Cox, S. B. Rao, and Y. Zhong. Ratio Regions: A Technique for Image Segmentation. In Proceedings of the International Conference on Pattern Recognition. IEEE, Aug.
[5] D. Geiger, A. Gupta, L. A. Costa, and J. Vlontzos. Dynamic programming for detecting, tracking and matching elastic contours. IEEE Trans. on Pattern Analysis and Machine Intelligence.
[6] Hristo N. Djidjev, Grammati E. Pantziou, and Christos D. Zaroliagis. Computing Shortest Paths and Distances in Planar Graphs. In Proc. 18th ICALP. Springer-Verlag.
[7] L. R. Ford and D. R. Fulkerson. Flows in Networks. Princeton Univ. Press, Princeton, NJ, 1962.
[8] Greg N. Frederickson. A new approach to all pairs shortest paths in planar graphs (extended abstract). In Proceedings of the Nineteenth Annual ACM Symposium on Theory of Computing, pages 19-28, May.
[9] Greg N. Frederickson. Fast algorithms for shortest paths in planar graphs, with applications. SIAM Journal on Computing, 16(6), December.
[10] Harold N. Gabow and Robert E. Tarjan. Faster Scaling Algorithms for Network Problems. SIAM Journal on Computing, 18(5).
[11] Andrew V. Goldberg. Scaling algorithms for the shortest path problem. SIAM Journal on Computing, 21(1).
[12] Monika R. Henzinger, Philip N. Klein, Satish Rao, and Sairam Subramanian. Faster Shortest-Path Algorithms for Planar Graphs. Journal of Computer and System Sciences, 55(1):3-23.
[13] R. Lipton, D. Rose, and R. E. Tarjan. Generalized nested dissection. SIAM Journal on Numerical Analysis, 16.
[14] Richard J. Lipton and Robert E. Tarjan. A separator theorem for planar graphs. SIAM Journal on Applied Mathematics, 36.
[15] G. Miller and J. Naor. Flow in planar graphs with multiple sources and sinks. SIAM Journal on Computing, 24.
[16] Joseph S. B. Mitchell, David M. Mount, and Christos H. Papadimitriou. The discrete geodesic problem. SIAM Journal on Computing, 16(4).
[17] Satish B. Rao. Faster algorithms for finding small edge cuts in planar graphs (extended abstract). In Proceedings of the Twenty-Fourth Annual ACM Symposium on the Theory of Computing, May.

Journal of Computer and System Sciences 72 (2006) 868-889. www.elsevier.com/locate/jcss

More information

Graphs: Introduction. Ali Shokoufandeh, Department of Computer Science, Drexel University

Graphs: Introduction. Ali Shokoufandeh, Department of Computer Science, Drexel University Graphs: Introduction Ali Shokoufandeh, Department of Computer Science, Drexel University Overview of this talk Introduction: Notations and Definitions Graphs and Modeling Algorithmic Graph Theory and Combinatorial

More information

Research Collection. An O(n^4) time algorithm to compute the bisection width of solid grid graphs. Report. ETH Library

Research Collection. An O(n^4) time algorithm to compute the bisection width of solid grid graphs. Report. ETH Library Research Collection Report An O(n^4) time algorithm to compute the bisection width of solid grid graphs Author(s): Feldmann, Andreas Emil; Widmayer, Peter Publication Date: 2011 Permanent Link: https://doi.org/10.3929/ethz-a-006935587

More information

Matching Algorithms. Proof. If a bipartite graph has a perfect matching, then it is easy to see that the right hand side is a necessary condition.

Matching Algorithms. Proof. If a bipartite graph has a perfect matching, then it is easy to see that the right hand side is a necessary condition. 18.433 Combinatorial Optimization Matching Algorithms September 9,14,16 Lecturer: Santosh Vempala Given a graph G = (V, E), a matching M is a set of edges with the property that no two of the edges have

More information

Directed Single Source Shortest Paths in Linear Average Case Time

Directed Single Source Shortest Paths in Linear Average Case Time Directed Single Source Shortest Paths in inear Average Case Time Ulrich Meyer MPI I 2001 1-002 May 2001 Author s Address ÍÐÖ ÅÝÖ ÅܹÈÐÒ¹ÁÒ ØØÙØ ĐÙÖ ÁÒÓÖÑØ ËØÙÐ ØÞÒÙ Û ½¾ ËÖÖĐÙÒ umeyer@mpi-sb.mpg.de www.uli-meyer.de

More information

Solutions for the Exam 6 January 2014

Solutions for the Exam 6 January 2014 Mastermath and LNMB Course: Discrete Optimization Solutions for the Exam 6 January 2014 Utrecht University, Educatorium, 13:30 16:30 The examination lasts 3 hours. Grading will be done before January 20,

More information

A New Algorithm for the Reconstruction of Near-Perfect Binary Phylogenetic Trees

A New Algorithm for the Reconstruction of Near-Perfect Binary Phylogenetic Trees A New Algorithm for the Reconstruction of Near-Perfect Binary Phylogenetic Trees Kedar Dhamdhere ½ ¾, Srinath Sridhar ½ ¾, Guy E. Blelloch ¾, Eran Halperin R. Ravi and Russell Schwartz March 17, 2005 CMU-CS-05-119

More information

Notes on Minimum Spanning Trees. Red Rule: Given a cycle containing no red edges, select a maximum uncolored edge on the cycle, and color it red.

Notes on Minimum Spanning Trees. Red Rule: Given a cycle containing no red edges, select a maximum uncolored edge on the cycle, and color it red. COS 521 Fall 2009 Notes on Minimum Spanning Trees 1. The Generic Greedy Algorithm The generic greedy algorithm finds a minimum spanning tree (MST) by an edge-coloring process. Initially all edges are uncolored.

More information

Algorithm Design (8) Graph Algorithms 1/2

Algorithm Design (8) Graph Algorithms 1/2 Graph Algorithm Design (8) Graph Algorithms / Graph:, : A finite set of vertices (or nodes) : A finite set of edges (or arcs or branches) each of which connect two vertices Takashi Chikayama School of

More information

22 Elementary Graph Algorithms. There are two standard ways to represent a

22 Elementary Graph Algorithms. There are two standard ways to represent a VI Graph Algorithms Elementary Graph Algorithms Minimum Spanning Trees Single-Source Shortest Paths All-Pairs Shortest Paths 22 Elementary Graph Algorithms There are two standard ways to represent a graph

More information

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings

On the Relationships between Zero Forcing Numbers and Certain Graph Coverings On the Relationships between Zero Forcing Numbers and Certain Graph Coverings Fatemeh Alinaghipour Taklimi, Shaun Fallat 1,, Karen Meagher 2 Department of Mathematics and Statistics, University of Regina,

More information

Introduction to Algorithms Third Edition

Introduction to Algorithms Third Edition Thomas H. Cormen Charles E. Leiserson Ronald L. Rivest Clifford Stein Introduction to Algorithms Third Edition The MIT Press Cambridge, Massachusetts London, England Preface xiü I Foundations Introduction

More information

Response Time Analysis of Asynchronous Real-Time Systems

Response Time Analysis of Asynchronous Real-Time Systems Response Time Analysis of Asynchronous Real-Time Systems Guillem Bernat Real-Time Systems Research Group Department of Computer Science University of York York, YO10 5DD, UK Technical Report: YCS-2002-340

More information

GRAPH DECOMPOSITION BASED ON DEGREE CONSTRAINTS. March 3, 2016

GRAPH DECOMPOSITION BASED ON DEGREE CONSTRAINTS. March 3, 2016 GRAPH DECOMPOSITION BASED ON DEGREE CONSTRAINTS ZOÉ HAMEL March 3, 2016 1. Introduction Let G = (V (G), E(G)) be a graph G (loops and multiple edges not allowed) on the set of vertices V (G) and the set

More information

Applications of the Linear Matroid Parity Algorithm to Approximating Steiner Trees

Applications of the Linear Matroid Parity Algorithm to Approximating Steiner Trees Applications of the Linear Matroid Parity Algorithm to Approximating Steiner Trees Piotr Berman Martin Fürer Alexander Zelikovsky Abstract The Steiner tree problem in unweighted graphs requires to find

More information

External-Memory Breadth-First Search with Sublinear I/O

External-Memory Breadth-First Search with Sublinear I/O External-Memory Breadth-First Search with Sublinear I/O Kurt Mehlhorn and Ulrich Meyer Max-Planck-Institut für Informatik Stuhlsatzenhausweg 85, 66123 Saarbrücken, Germany. Abstract. Breadth-first search

More information

Tutorial for Algorithm s Theory Problem Set 5. January 17, 2013

Tutorial for Algorithm s Theory Problem Set 5. January 17, 2013 Tutorial for Algorithm s Theory Problem Set 5 January 17, 2013 Exercise 1: Maximum Flow Algorithms Consider the following flow network: a) Solve the maximum flow problem on the above network by using the

More information

Lecture 8 13 March, 2012

Lecture 8 13 March, 2012 6.851: Advanced Data Structures Spring 2012 Prof. Erik Demaine Lecture 8 13 March, 2012 1 From Last Lectures... In the previous lecture, we discussed the External Memory and Cache Oblivious memory models.

More information

Minimum Cut of Directed Planar Graphs in O(n log log n) Time

Minimum Cut of Directed Planar Graphs in O(n log log n) Time Minimum Cut of Directed Planar Graphs in O(n log log n) Time Shay Mozes Kirill Nikolaev Yahav Nussbaum Oren Weimann Abstract We give an O(n log log n) time algorithm for computing the minimum cut (or equivalently,

More information

Fuzzy Hamming Distance in a Content-Based Image Retrieval System

Fuzzy Hamming Distance in a Content-Based Image Retrieval System Fuzzy Hamming Distance in a Content-Based Image Retrieval System Mircea Ionescu Department of ECECS, University of Cincinnati, Cincinnati, OH 51-3, USA ionescmm@ececs.uc.edu Anca Ralescu Department of

More information

Graph Traversal. 1 Breadth First Search. Correctness. find all nodes reachable from some source node s

Graph Traversal. 1 Breadth First Search. Correctness. find all nodes reachable from some source node s 1 Graph Traversal 1 Breadth First Search visit all nodes and edges in a graph systematically gathering global information find all nodes reachable from some source node s Prove this by giving a minimum

More information

Chapter 9 Graph Algorithms

Chapter 9 Graph Algorithms Chapter 9 Graph Algorithms 2 Introduction graph theory useful in practice represent many real-life problems can be slow if not careful with data structures 3 Definitions an undirected graph G = (V, E)

More information

11/22/2016. Chapter 9 Graph Algorithms. Introduction. Definitions. Definitions. Definitions. Definitions

11/22/2016. Chapter 9 Graph Algorithms. Introduction. Definitions. Definitions. Definitions. Definitions Introduction Chapter 9 Graph Algorithms graph theory useful in practice represent many real-life problems can be slow if not careful with data structures 2 Definitions an undirected graph G = (V, E) is

More information

Reachability in K 3,3 -free and K 5 -free Graphs is in Unambiguous Logspace

Reachability in K 3,3 -free and K 5 -free Graphs is in Unambiguous Logspace CHICAGO JOURNAL OF THEORETICAL COMPUTER SCIENCE 2014, Article 2, pages 1 29 http://cjtcs.cs.uchicago.edu/ Reachability in K 3,3 -free and K 5 -free Graphs is in Unambiguous Logspace Thomas Thierauf Fabian

More information

A SIMPLE APPROXIMATION ALGORITHM FOR NONOVERLAPPING LOCAL ALIGNMENTS (WEIGHTED INDEPENDENT SETS OF AXIS PARALLEL RECTANGLES)

A SIMPLE APPROXIMATION ALGORITHM FOR NONOVERLAPPING LOCAL ALIGNMENTS (WEIGHTED INDEPENDENT SETS OF AXIS PARALLEL RECTANGLES) Chapter 1 A SIMPLE APPROXIMATION ALGORITHM FOR NONOVERLAPPING LOCAL ALIGNMENTS (WEIGHTED INDEPENDENT SETS OF AXIS PARALLEL RECTANGLES) Piotr Berman Department of Computer Science & Engineering Pennsylvania

More information

On Clusterings Good, Bad and Spectral

On Clusterings Good, Bad and Spectral On Clusterings Good, Bad and Spectral Ravi Kannan Computer Science, Yale University. kannan@cs.yale.edu Santosh Vempala Ý Mathematics, M.I.T. vempala@math.mit.edu Adrian Vetta Þ Mathematics, M.I.T. avetta@math.mit.edu

More information

MOST attention in the literature of network codes has

MOST attention in the literature of network codes has 3862 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 8, AUGUST 2010 Efficient Network Code Design for Cyclic Networks Elona Erez, Member, IEEE, and Meir Feder, Fellow, IEEE Abstract This paper introduces

More information

PTAS for geometric hitting set problems via Local Search

PTAS for geometric hitting set problems via Local Search PTAS for geometric hitting set problems via Local Search Nabil H. Mustafa nabil@lums.edu.pk Saurabh Ray saurabh@cs.uni-sb.de Abstract We consider the problem of computing minimum geometric hitting sets

More information

Lecture Notes on GRAPH THEORY Tero Harju

Lecture Notes on GRAPH THEORY Tero Harju Lecture Notes on GRAPH THEORY Tero Harju Department of Mathematics University of Turku FIN-20014 Turku, Finland e-mail: harju@utu.fi 2007 Contents 1 Introduction 2 1.1 Graphs and their plane figures.........................................

More information

Flows on surface embedded graphs. Erin Chambers

Flows on surface embedded graphs. Erin Chambers Erin Wolf Chambers Why do computational geometers care about topology? Many areas, such as graphics, biology, robotics, and networking, use algorithms which compute information about point sets. Why do

More information

Orthogonal Range Search and its Relatives

Orthogonal Range Search and its Relatives Orthogonal Range Search and its Relatives Coordinate-wise dominance and minima Definition: dominates Say that point (x,y) dominates (x', y') if x

More information

Connected Components of Underlying Graphs of Halving Lines

Connected Components of Underlying Graphs of Halving Lines arxiv:1304.5658v1 [math.co] 20 Apr 2013 Connected Components of Underlying Graphs of Halving Lines Tanya Khovanova MIT November 5, 2018 Abstract Dai Yang MIT In this paper we discuss the connected components

More information

Spanning Trees and Optimization Problems (Excerpt)

Spanning Trees and Optimization Problems (Excerpt) Bang Ye Wu and Kun-Mao Chao Spanning Trees and Optimization Problems (Excerpt) CRC PRESS Boca Raton London New York Washington, D.C. Chapter 3 Shortest-Paths Trees Consider a connected, undirected network

More information

Lecture 5 February 26, 2007

Lecture 5 February 26, 2007 6.851: Advanced Data Structures Spring 2007 Prof. Erik Demaine Lecture 5 February 26, 2007 Scribe: Katherine Lai 1 Overview In the last lecture we discussed the link-cut tree: a dynamic tree that achieves

More information

Formal Model. Figure 1: The target concept T is a subset of the concept S = [0, 1]. The search agent needs to search S for a point in T.

Formal Model. Figure 1: The target concept T is a subset of the concept S = [0, 1]. The search agent needs to search S for a point in T. Although this paper analyzes shaping with respect to its benefits on search problems, the reader should recognize that shaping is often intimately related to reinforcement learning. The objective in reinforcement

More information

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Shortest Paths Date: 10/13/15

Introduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Shortest Paths Date: 10/13/15 600.363 Introduction to Algorithms / 600.463 Algorithms I Lecturer: Michael Dinitz Topic: Shortest Paths Date: 10/13/15 14.1 Introduction Today we re going to talk about algorithms for computing shortest

More information

Faster Algorithms for Computing Distances between One-Dimensional Point Sets

Faster Algorithms for Computing Distances between One-Dimensional Point Sets Faster Algorithms for Computing Distances between One-Dimensional Point Sets Justin Colannino School of Computer Science McGill University Montréal, Québec, Canada July 8, 25 Godfried Toussaint Abstract

More information

Disjoint, Partition and Intersection Constraints for Set and Multiset Variables

Disjoint, Partition and Intersection Constraints for Set and Multiset Variables Disjoint, Partition and Intersection Constraints for Set and Multiset Variables Christian Bessiere ½, Emmanuel Hebrard ¾, Brahim Hnich ¾, and Toby Walsh ¾ ¾ ½ LIRMM, Montpelier, France. Ö Ð ÖÑÑ Ö Cork

More information

Rigidity, connectivity and graph decompositions

Rigidity, connectivity and graph decompositions First Prev Next Last Rigidity, connectivity and graph decompositions Brigitte Servatius Herman Servatius Worcester Polytechnic Institute Page 1 of 100 First Prev Next Last Page 2 of 100 We say that a framework

More information