Advanced Algorithms Class Notes for Monday, October 23, 2012
Min Ye, Mingfu Shao, and Bernard Moret


Greedy Algorithms (continued)

The best known application where the greedy algorithm is optimal is surely the construction of a minimum spanning tree (MST). Most of you are familiar with Prim's and Kruskal's algorithms, but they are just two of a rather large family of greedy algorithms for the MST problem. We look at two more algorithms in that family and then proceed to prove that any algorithm in that family indeed returns a minimum spanning tree.

Greedy algorithms for the MST problem

Greedy algorithms use only a fraction of the information available about the problem. Bottom-up greedy methods build solutions piece by piece (starting from the empty set) by selecting, among the remaining pieces, the one that optimizes the value of the partial solution, while ensuring that the subset selected so far can be extended into a feasible solution. Top-down greedy methods (much less common) build solutions from the full set of pieces by removing one piece at a time, selecting a piece whose removal optimizes the value of the remaining collection while ensuring that this collection continues to contain feasible solutions. Thus in both cases, and indeed for any greedy algorithm, the idea is to produce the largest immediate gain while maintaining feasibility.

In the case of the MST problem, we are given an undirected graph G = (V, E) and a length (distance/weight/etc.) for each edge, d: E → R. Our aim is to find a spanning tree, that is, a tree connecting all vertices, of minimum total weight. MSTs have two important properties, usually called the cycle property and the cut property. The cycle property says that, for any cycle X ⊆ E in the graph, if the weight of an edge e ∈ X is strictly larger than the weight of every other edge of X, then this edge e cannot belong to any MST of the graph.
(Phrased slightly differently, for any cycle X in the graph, if the weight of an edge e ∈ X is larger than or equal to the weight of every other edge of X, then there exists an MST that does not contain e.) Recall that a cut in a graph is a partition of the vertices of the graph into two nonempty subsets, Y = {S, V − S}, with S ⊆ V and S ≠ ∅; this partition induces a set of cut edges, the cutset C_Y = {{u, v} ∈ E | u ∈ S, v ∈ V − S}. In a connected graph (and we are always given a connected graph for the MST problem), there is a bijection between cuts and cutsets, so that we can specify one or the other. The cut property says that, for any cutset C in graph G, if the weight of an edge e ∈ C is strictly smaller than the weight of every other edge of C, then this edge belongs to all MSTs of this graph. (Phrased slightly differently, for any cutset C_Y in graph G, if the weight of an edge e ∈ C_Y is smaller than or equal to the weight of every other edge of C_Y, then there exists an MST that contains e.)
Now a bottom-up greedy method for the MST starts from an empty set and adds one piece at a time (an edge or a vertex, although the distinction is somewhat artificial), subject to not creating cycles, until a tree is built (or until there remains no candidate piece; the two are equivalent). In contrast, a top-down greedy method for the MST starts from the entire graph and removes one edge at a time, subject to not disconnecting the graph, until no more edges can be removed. In each case, the choice is made on the basis of the contribution made by the chosen edge (or vertex) to the current collection, a purely local decision. A top-down approach is the Reverse-Delete algorithm, first mentioned by Kruskal (but not to be confused with Kruskal's algorithm, which is, of course, a bottom-up approach). The Reverse-Delete algorithm starts with the original graph and deletes edges from it. The algorithm works as follows. Start with graph G, which contains a list of edges E. Sort the edges in E in decreasing order of weight, then go through the edges one by one, from largest weight down to smallest. For each edge in turn, check whether deleting the edge would disconnect the graph; if not, remove that edge. The proof of optimality for this algorithm is quite straightforward, using the cycle property. If the graph G has no cycles, then it is a tree and thus it is its own unique (and hence also optimal) spanning tree. As long as there remains a cycle in the graph, we do not have a tree and must remove at least one edge. Reverse-Delete removes the remaining edge of largest weight; this edge must appear in at least one cycle, since its removal does not disconnect the graph, and it is necessarily the edge of largest weight in any cycle in which it appears; thus the cycle property ensures that there exists at least one MST that does not include it. We can view the bottom-up methods as proceeding by coalescing equivalence classes.
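The Reverse-Delete procedure just described can be sketched as follows. This is a naive illustration (the representation and function names are my own, not part of the notes): it re-runs a full graph search to test connectivity after each tentative deletion, which keeps the code short at the cost of asymptotic efficiency.

```python
# Reverse-Delete sketch: scan edges from heaviest to lightest and drop
# every edge whose removal keeps the graph connected.
from collections import defaultdict

def is_connected(vertices, edges):
    """Graph-search connectivity check over the given edge set."""
    adj = defaultdict(list)
    for u, v, _ in edges:
        adj[u].append(v)
        adj[v].append(u)
    vertices = list(vertices)
    seen, stack = {vertices[0]}, [vertices[0]]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(vertices)

def reverse_delete_mst(vertices, edges):
    """edges is a list of (u, v, weight) triples; returns the kept edges."""
    kept = sorted(edges, key=lambda e: e[2], reverse=True)
    for e in list(kept):          # iterate over a snapshot, heaviest first
        kept.remove(e)
        if not is_connected(vertices, kept):
            kept.append(e)        # e was a bridge: it must stay
    return kept
```

With a connectivity check costing O(|V| + |E|) per edge, this sketch runs in O(|E|(|V| + |E|)) time; faster versions need a decremental-connectivity data structure.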
An equivalence class consists of a set of vertices with an associated set of edges that form a minimum spanning tree for the vertices in the class. Initially, each vertex is the sole element of its equivalence class and its associated set of edges is empty. When the algorithm terminates, only one equivalence class remains and the associated set of edges defines a minimum spanning tree. At each step of the algorithm, we select an edge with an endpoint in each of two equivalence classes and coalesce these two classes, thereby combining two trees into one larger tree. (Edges with both endpoints in the same equivalence class are permanently excluded, since their selection would lead to a cycle.) In order to minimize the increase in the value of the objective function, greediness dictates that the allowable edge of least cost be chosen next. This choice can be made with or without additional constraints. At one extreme, we can apply no additional constraint and always choose the shortest edge that combines two spanning trees into a larger spanning tree; at the other extreme, we can designate a special equivalence class that must be involved in any merge operation. The first approach can be viewed as selecting edges; it is known as Kruskal's algorithm, after J. B. Kruskal, who first presented it in 1956. The second is best viewed (at least for programming purposes) as selecting vertices (adding them one by one to a single partial spanning tree) and is known as Prim's algorithm, after R. C. Prim, who presented it in 1957. A third, more general, approach is in fact the oldest algorithm proposed for the MST and among the oldest algorithms formally defined in Computer Science; it is due to O. Borůvka, who first published it in 1926 as a method of constructing an efficient
electrical network for the region of Moravia. Borůvka's algorithm considers all current equivalence classes (spanning trees for subsets of vertices) at once and joins each to its closest neighbor (where the distance between two classes is defined as the length of the shortest edge with one endpoint in each), subject to not creating a cycle. We can prove the correctness of all three coalescence-based algorithms (Kruskal's, Prim's, and Borůvka's) at once by proving the correctness of the more general algorithm.

Theorem 1. If, at each step of the algorithm, an arbitrary equivalence class T_i is selected, but the edge selected for inclusion is the smallest that has exactly one endpoint in T_i, then the final tree that results is a minimum spanning tree.

Proof. As G is connected, there is always at least one allowable edge at each iteration. As each iteration can proceed irrespective of our choice of T_i, and as an iteration decreases the number of equivalence classes by one, the algorithm terminates. The proof is by induction, though we present it somewhat informally. Let T_A be a spanning tree produced by the above algorithm and let T_M be a minimum spanning tree. We give a procedure for transforming T_M into T_A. We will form a sequence of trees, each slightly different from its predecessor, with the property that the sums of the lengths of the edges in successive trees are the same. Since T_A will be at the far end of the sequence of transformations, it must also be a minimum spanning tree. Label the edges of T_A by the iteration on which they entered the tree. Let e_i be the edge of lowest index that is present in T_A but not in T_M. The addition of e_i to T_M forms a cycle.
Note that the length of e_i is greater than or equal to that of every other edge in the cycle; otherwise, T_M would not be a minimum spanning tree, because breaking any edge in the cycle would produce a spanning tree, and breaking a longer edge, if one existed, would reduce the total cost. Now, when e_i was added to the forest of trees that eventually became T_A, it connected an arbitrarily chosen tree, T_i, to some other tree. Traverse the cycle in T_M starting from the endpoint of e_i that was in T_i, going in the direction that does not lead to an immediate traversal of e_i. At some point in this traversal, we first encounter an edge with exactly one endpoint in T_i. It might be the first edge we encounter, or it might be many edges into the traversal, but such an edge must exist, since the other endpoint of e_i is not in T_i. Furthermore, this edge, call it ê, cannot be e_i. Now the length of e_i cannot exceed that of ê, because ê was an allowable edge but was not selected; thus the two edges have equal length. We replace ê with e_i in T_M: the resulting tree has the same total length. Note that ê may or may not be an edge of T_A, but if it is, then its index is greater than i, so that our new tree now first differs from T_A at some index greater than i. Replacing T_M with this new minimum spanning tree, we continue this process until there are no differences, i.e., until T_M has been transformed into T_A.

Iterative improvement methods

Our next class of methods are those that start with a complete solution structure and proceed to refine it through successive iterations; at each iteration, an improvement is made
on a local basis. This is a more powerful approach than the greedy approach: whereas a greedy approach never alters any choice it has already made, an iterative improvement approach has no problem doing so. Moreover, the number of iterations is not fixed; indeed, bounding the number of iterations is the major problem in analyzing an iterative improvement algorithm, whereas the number of steps taken by a greedy algorithm is simply the number of elements included in the solution.

Matching and flow

Matching and network flow are the two most important problems for which an iterative improvement method delivers optimal solutions. We first consider the maximum matching problem in bipartite graphs.

Maximum bipartite matching

A matching in a graph is a subset of edges that do not share any endpoint; a maximum matching is just a matching of maximum cardinality. A graph is said to be bipartite if its vertices can be partitioned into two sets in such a way that every edge of the graph has one endpoint in one set and the other endpoint in the other set; the vertices of each set form an independent set. Matching in bipartite graphs is one of the fundamental optimization problems, as it is used for assignment problems, that is, problems where one wants to find the optimal way to assign, say, work crews to jobs: problems to be solved every day on construction sites, in factories, in airlines and railways, as well as in job scheduling for computing systems. We describe an iterative improvement algorithm for this problem: an algorithm that refines the current solution through local changes, using an approach that can be repeated many times, each time improving the quality of the solution. In the problem of maximum bipartite matching, in order to improve an existing matching, we must start by identifying unmatched vertices of degree at least 1; we need at least one such vertex on each side of the bipartite graph.
Consider the trivial 4-vertex bipartite graph with vertex set {a, b, 1, 2} and edge set {{a,1}, {a,2}, {b,1}}, with current matching M = {{a,1}}. It is clear that there exists a larger matching, namely M' = {{a,2}, {b,1}}. Note that, in order to transform M into M', the set of matched vertices simply gains two new members, but the set of matched edges, while larger by one, may have nothing in common with the previous set. In this trivial example, we could have started our search at vertex b, which has just one neighbor, vertex 1; but vertex 1 was already matched, so we had to unmatch it (undoing a previous decision), making its previous mate, vertex a, an unmatched vertex; we then followed the matched edge back to vertex a, where we found that a had an unmatched neighbor, vertex 2. We thus identified a path of three edges, the first and the last unmatched, the middle one matched; by flipping the status of each edge, from matched to unmatched and vice versa, we replaced a path with one matched edge by the same path with two matched edges. Let us formally define what this type of path is.
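On this toy graph, the flip is just a symmetric difference between the matching and the edge set of the path; a small sketch (the frozenset edge representation is my own choice, so that {a,1} and {1,a} compare equal):

```python
# Flip the edges of a path in and out of a matching: path edges that
# were matched become unmatched, and vice versa.
def flip(matching, path_edges):
    return matching ^ path_edges      # symmetric difference of edge sets

M = {frozenset({'a', 1})}
# The path b - 1 - a - 2: unmatched edge, matched edge, unmatched edge.
path = {frozenset({'b', 1}), frozenset({'a', 1}), frozenset({'a', 2})}
M2 = flip(M, path)                    # M2 = {{b,1}, {a,2}}
```

The result has one more matched edge than M, exactly as in the discussion above.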
Let G = (V, E) be a graph and M a matching. An alternating path with respect to M is a path in which every other edge is in M, while the others are in E − M. If, in addition, the path is of odd length and the first and last vertices on the path are unmatched, then the alternating path is called an augmenting path. The reason it is called an augmenting path is that we can use it to augment the size of the matching: whereas an alternating path may have the same number of edges in M and in E − M, or one more in one than in the other, an augmenting path must have exactly one more edge in E − M than in M. Moreover, because of the definition of matchings, it is safe to flip the status of every edge in an augmenting path from matched to unmatched, and vice versa: none of the vertices on the augmenting path can be the endpoint of a matched edge other than those already on the path. Augmenting paths are thus the tool we needed to design an iterative improvement algorithm: in general terms, we start with an arbitrary matching (possibly the empty one), then we search for an augmenting path in the graph; if one is found, we augment the matching by flipping the status of all edges along the augmenting path; if none is found, we stop. The obvious question, at this point, is whether the absence of any augmenting path indicates just a local maximum or a global one. The answer is that it is global: if G has no augmenting path with respect to M, then M is a maximum matching; it is optimal. We phrase this result positively.

Theorem 2. Let G be a graph, M' an optimal matching for G, and M any matching for G such that we have |M| < |M'|. Then G has an augmenting path with respect to M.

This result is due to the French mathematician Claude Berge and is known as Berge's theorem. The proof is deceptively simple, but note that it is nonconstructive.

Proof. Let M ⊕ M' denote the symmetric difference of M and M', i.e., M ⊕ M' = (M − M') ∪ (M' − M), and consider the subgraph G' = (V, M ⊕ M').
All vertices of G' have degree two or less, because they have at most one incident edge from each of M and M'; moreover, every connected component of G' is one of: (i) a single vertex; (ii) a cycle of even length, with edges drawn alternately from M and M'; or (iii) a path with edges drawn alternately from M and M'. As the cardinality of M' exceeds that of M, there exists at least one path composed of alternating edges from M and M', with more edges from M' than from M. This path must begin and end with edges from M', and its endpoints are unmatched in M, because the path is a connected component of G'; hence this path is an augmenting path.

Berge's theorem shows that the use of augmenting paths not only enables us to improve on the quality of an initial solution, it enables us to obtain an optimal solution. Note that the definitions of alternating paths and augmenting paths hold just as well for nonbipartite graphs as for bipartite ones; and Berge's theorem does too. The next step is to develop an algorithm for finding augmenting paths, and this is where the difference between bipartite and nonbipartite graphs will show.
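Although the proof is nonconstructive, the symmetric difference it examines is easy to compute; a small sketch (representation and names are my own), which collects the connected components of M ⊕ M' so that one can inspect the path with surplus M'-edges:

```python
# Components of the symmetric difference of two matchings; any
# component holding more M'-edges than M-edges is an augmenting
# path with respect to M. Edges are frozensets of their endpoints.
from collections import defaultdict

def symmetric_difference_components(M, M_opt):
    diff = M ^ M_opt
    adj = defaultdict(list)
    for e in diff:
        u, v = tuple(e)
        adj[u].append(v)
        adj[v].append(u)
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        comp, stack = set(), [start]
        seen.add(start)
        while stack:                  # ordinary graph search
            x = stack.pop()
            comp.add(x)
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        components.append(comp)
    return components
```

On the toy graph from the previous section, the single component is the augmenting path b, 1, a, 2.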
In a bipartite graph, any augmenting path begins on one side of the graph and ends on the other. Thus a search algorithm can simply start at any unmatched vertex on one side of the graph, say the left side, and traverse any edge to the other side. If the endpoint on the right side is also unmatched, then an augmenting path, consisting of a single unmatched edge, has been found. If the other endpoint is matched, then the algorithm traverses that matched edge back to the left side and follows any unmatched edge, if one exists, to an unvisited vertex on the right side. The process is repeated until either an augmenting path is found or a dead end on the left side is reached. Unmatched edges are always traversed from the left side to the right side, and matched edges in the opposite direction. If a dead end is reached, we must explore other paths until we find an augmenting path or run out of possibilities. In developing an augmenting path, choices arise in only two places: in selecting an initial unmatched vertex and in selecting an unmatched edge out of a vertex on the left side. In order to examine all possibilities for augmenting paths, we need to explore these choices in some systematic way; because all augmenting paths make exactly the same contribution of one additional matched edge, we should search for the shortest augmenting paths. Thus we use a breadth-first search of the graph, starting at each unmatched vertex on the left side. If any of the current active vertices (the frontier of the BFS, which will always consist of vertices on the left side) has an unmatched neighbor on the right, we are done. Otherwise, from each neighbor on the right, we follow the matched edge of which it is an endpoint back to a vertex on the left and repeat the process. Thus the BFS increases path lengths by 2 at each iteration, because the move back to the left along matched edges is forced.
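Repeating one such search until it fails gives the basic algorithm. The notes describe the search as a BFS; for brevity, this sketch uses the equivalent depth-first formulation of the same augmenting-path search (often called Kuhn's algorithm), which finds some augmenting path rather than a shortest one but achieves the same overall bound. Names and representation are illustrative:

```python
# Basic augmenting-path bipartite matching: for each free left vertex,
# search for an augmenting path and flip it.
def max_bipartite_matching(adj):
    """adj maps each left vertex to a list of right neighbors;
    returns the matching as a dict right -> left."""
    match = {}   # right vertex -> left vertex currently matched to it

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current mate can be rematched elsewhere:
            if v not in match or try_augment(match[v], visited):
                match[v] = u          # flip: edge (u, v) becomes matched
                return True
        return False

    for u in adj:
        try_augment(u, set())
    return match
```

On the earlier toy graph, starting from b forces the rematching of a to vertex 2, exactly the flip described above.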
The BFS takes O(|E|) time, as it cannot look at an edge more than twice (once from each end); as the number of augmenting paths we may find is in O(|V|), the running time of this BFS augmenting strategy is O(|V| · |E|). Since the input size is Θ(|V| + |E|), the time taken is more than linear, but no more than quadratic, in the size of the input. However, we are wasting a lot of time: each new BFS starts from scratch and, most likely, will follow many paths already followed in the previous BFS. And each BFS produces a single new augmenting path. Yet there will typically be a number of augmenting paths in a graph with respect to a matching, especially if that matching is small. Instead of stopping at the first unmatched neighbor on the right, we could finish that stage of the BFS, collecting all unmatched neighbors on the right. Doing so would not increase the worst-case running time of the BFS, yet might yield multiple augmenting paths of the same length. However, we can use multiple augmenting paths only if they are vertex-disjoint, since otherwise we could cause conflicting assignments of vertices or even edges. The BFS might discover that k_l of the current active vertices (on the left) have an unmatched neighbor on the right, but some of these neighbors might be shared; if there are k_r unmatched neighbors on the right, the maximum number of disjoint augmenting paths is min{k_l, k_r}. The number may be smaller, however, because this sharing of vertices can occur at any stage along the alternating paths. Thus we must adjust our BFS to provide back pointers, so that we can retrace paths from right-side unmatched vertices reached in the search; and we must add a backtracing phase, which retraces at most one path for each unmatched vertex reached on the right-hand side. The backtracing is itself a graph search. Specifically, for each left-side vertex encountered during the breadth-first search, we record its distance from the closest unmatched left-side vertex, passing as before through matched right-side vertices. We use this information to run a (backward) depth-first search from each unmatched right-side vertex discovered during the BFS: during a DFS we consider only edges that take us one level closer to unmatched left-side vertices. When we discover an augmenting path, we eliminate the vertices along this path from consideration by any remaining DFS, thereby ensuring that our augmenting paths will be vertex-disjoint. We can hope that the number of augmenting paths found during each search is more than a constant, so that the number of searches (iterations) to be run is significantly decreased, preferably to o(|V|). We characterize the gain to be realized through a series of small theorems; these theorems apply equally to general graphs and bipartite graphs. We begin with a more precise proof of Berge's theorem that allows us to refine its conclusion.

Theorem 3. Let M_1 and M_2 be two matchings in some graph G = (V, E), with |M_1| > |M_2|. Then the subgraph G' = (V, M_1 ⊕ M_2) contains at least |M_1| − |M_2| vertex-disjoint augmenting paths with respect to M_2.

Proof. Recall that every connected component of G' is one of: (i) a single vertex; (ii) a cycle of even length, with edges alternately drawn from M_1 and M_2; or (iii) a path with edges alternately drawn from M_1 and M_2. Let C_i = (V_i, E_i) be the i-th connected component and define δ(C_i) = |E_i ∩ M_1| − |E_i ∩ M_2|. From our previous observations, we know that δ(C_i) must be one of −1, 0, or 1 and that it equals 1 exactly when C_i is an augmenting path with respect to M_2. Now we have

    Σ_i δ(C_i) = |M_1 − M_2| − |M_2 − M_1| = |M_1| − |M_2|,

so that at least |M_1| − |M_2| components C_i are such that δ(C_i) equals 1, which proves the theorem.

This tells us that many disjoint augmenting paths exist, but says nothing about their lengths, nor about finding them.
Indeed, if we take M_2 to be the empty set and M_1 to be a maximum matching, the theorem tells us that the original graph contains enough disjoint augmenting paths to go from no matching at all to a maximum matching in a single step! But these paths will normally be of various lengths, and finding such a set is actually a very hard problem. We will focus on finding a set of disjoint shortest augmenting paths (thus all of the same length) with respect to the current matching; such a set will normally not contain enough paths to obtain a maximum matching in one step. Our next result is intuitively obvious, but the theorem proves it and also makes it precise: successive shortest augmenting paths cannot become shorter.

Theorem 4. Let G = (V, E) be a graph, with M a nonmaximal matching, P a shortest augmenting path with respect to M, and P' any augmenting path with respect to the augmented matching M ⊕ P. Then we have

    |P'| ≥ |P| + 2|P ∩ P'|.
Proof. The matching M ⊕ P ⊕ P' contains two more edges than M, so that, by our previous theorem, M ⊕ (M ⊕ P ⊕ P') = P ⊕ P' contains (at least) two vertex-disjoint augmenting paths with respect to M; call them P_1 and P_2. Thus we have |P ⊕ P'| ≥ |P_1| + |P_2|. Since P is a shortest augmenting path with respect to M, we also have |P| ≤ |P_1| and |P| ≤ |P_2|, so that we get |P ⊕ P'| ≥ 2|P|. Since P ⊕ P' is (P ∪ P') − (P ∩ P'), we can write |P ⊕ P'| = |P| + |P'| − 2|P ∩ P'|. Substituting into our preceding inequality yields our conclusion.

An interesting corollary (especially given our BFS approach to finding shortest augmenting paths) is that two successive shortest augmenting paths have the same length only if they are disjoint. Our new algorithm uses all the disjoint shortest augmenting paths it finds, as follows. Begin with an arbitrary (possibly empty) matching. Repeatedly find a maximal set of vertex-disjoint shortest augmenting paths, and use them all to augment the current matching, until no augmenting path can be found. Now we are ready to prove the crucial result on the worst-case number of searches required to obtain a maximum matching. The result itself is on the number of different lengths that can be found among the collection of shortest augmenting paths produced in successive searches.

Theorem 5. Let s be the cardinality of a maximum matching and let P_1, P_2, ..., P_s be a sequence of shortest augmenting paths that build on the empty matching. Then the number of distinct integers in the sequence |P_1|, |P_2|, ..., |P_s| cannot exceed 2⌈√s⌉.

The intuition here is that, as we start the first search with an empty matching (or a small one), there will be many disjoint shortest augmenting paths, and so there will be many repeated values toward the beginning of the sequence of path lengths; toward the end, however, augmenting paths are more complex, longer, and rarer, so that most values toward the end of the sequence will be distinct.
The proof formalizes this intuition by using a midpoint in the number of distinct values that is very far along the sequence of augmenting paths: not at s/2, but at s − √s.

Proof. Let r = s − ⌈√s⌉ and consider M_r, the r-th matching in the augmentation sequence. Since |M_r| = r and since the maximum matching has cardinality s > r, we conclude (using our extended version of Berge's theorem) that there exist at least s − r vertex-disjoint augmenting paths with respect to M_r. (These need not be the remaining augmenting paths in our sequence, P_{r+1}, P_{r+2}, ..., P_s.) Altogether these paths contain at most all of the edges of M_r, so that the shortest of them contains at most ⌊r/(s−r)⌋ such edges (some path must contain no more than the average number) and thus at most 2⌊r/(s−r)⌋ + 1 edges in all. But the shortest augmenting path is precisely the next one picked, so that we get

    |P_{r+1}| ≤ 2⌊(s − ⌈√s⌉)/⌈√s⌉⌋ + 1 ≤ 2(s − √s)/√s + 1 = 2√s − 1 < 2⌈√s⌉ + 1.
Since |P_{r+1}| is an odd integer (all augmenting paths have odd length), we can conclude that |P_{r+1}| ≤ 2⌈√s⌉ − 1. Hence each of P_1, P_2, ..., P_r must have length no greater than 2⌈√s⌉ − 1. Therefore these r lengths, all odd, must be distributed among at most ⌈√s⌉ different values, and this bound can be reached only if |P_r| = |P_{r+1}|. Since P_{r+1}, P_{r+2}, ..., P_s cannot contribute more than s − r = ⌈√s⌉ distinct values, the total number of distinct integers in the sequence does not exceed 2⌈√s⌉.

Thus our improved algorithm iterates O(√|V|) times, as opposed to Θ(|V|) times for the original version, a substantial improvement, since the worst-case cost of an iteration remains unchanged. For bipartite graphs, we can construct a maximum matching in O(√|V| · |E|) time.
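Putting the pieces together (a layered BFS followed by vertex-disjoint DFS backtracing, repeated once per phase) gives the O(√|V| · |E|) algorithm, which is due to Hopcroft and Karp (the notes do not name it). A sketch mirroring the standard formulation, with illustrative names and a dictionary-based graph representation assumed:

```python
# Hopcroft-Karp: each while-iteration is one phase that augments along
# a maximal set of vertex-disjoint shortest augmenting paths.
from collections import deque

def hopcroft_karp(adj):
    """adj maps each left vertex to its right neighbors;
    returns the matching as a dict left -> right."""
    INF = float('inf')
    left = list(adj)
    match_l = {u: None for u in left}   # left vertex -> mate (None = free)
    match_r = {}                        # right vertex -> mate
    dist = {}

    def bfs():
        # Layer left vertices by distance from the nearest free left
        # vertex; dist[None] records the layer of the first free right
        # vertex reached, i.e., the shortest augmenting length.
        q = deque()
        for u in left:
            dist[u] = 0 if match_l[u] is None else INF
            if match_l[u] is None:
                q.append(u)
        dist[None] = INF
        while q:
            u = q.popleft()
            if dist[u] < dist[None]:
                for v in adj[u]:
                    w = match_r.get(v)          # None when v is free
                    if dist[w] == INF:
                        dist[w] = dist[u] + 1   # forced hop back left
                        if w is not None:
                            q.append(w)
        return dist[None] != INF                # an augmenting path exists

    def dfs(u):
        # Retrace one layer at a time; vertices that fail are cut off,
        # so the augmenting paths found in a phase stay vertex-disjoint.
        if u is None:
            return True                         # reached a free right vertex
        for v in adj[u]:
            w = match_r.get(v)
            if dist[w] == dist[u] + 1 and dfs(w):
                match_l[u] = v                  # flip the path edges
                match_r[v] = u
                return True
        dist[u] = INF                           # dead end for this phase
        return False

    while bfs():                                # one phase per path length
        for u in left:
            if match_l[u] is None:
                dfs(u)
    return {u: v for u, v in match_l.items() if v is not None}
```

Each phase costs O(|E|) for the BFS plus O(|E|) for the disjoint DFS retraces, and by Theorem 5 only O(√|V|) phases occur, giving the stated bound.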
3 NoWait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical nowait job shop setting, we are given a set of processing times for each operation. We may select
More information1 Matchings with Tutte s Theorem
1 Matchings with Tutte s Theorem Last week we saw a fairly strong necessary criterion for a graph to have a perfect matching. Today we see that this condition is in fact sufficient. Theorem 1 (Tutte, 47).
More information1 Variations of the Traveling Salesman Problem
Stanford University CS26: Optimization Handout 3 Luca Trevisan January, 20 Lecture 3 In which we prove the equivalence of three versions of the Traveling Salesman Problem, we provide a 2approximate algorithm,
More information11.1 Facility Location
CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Facility Location ctd., Linear Programming Date: October 8, 2007 Today we conclude the discussion of local
More information18 Spanning Tree Algorithms
November 14, 2017 18 Spanning Tree Algorithms William T. Trotter trotter@math.gatech.edu A Networking Problem Problem The vertices represent 8 regional data centers which need to be connected with highspeed
More informationAnnouncements Problem Set 5 is out (today)!
CSC263 Week 10 Announcements Problem Set is out (today)! Due Tuesday (Dec 1) Minimum Spanning Trees The Graph of interest today A connected undirected weighted graph G = (V, E) with weights w(e) for each
More informationLecture notes on: Maximum matching in bipartite and nonbipartite graphs (Draft)
Lecture notes on: Maximum matching in bipartite and nonbipartite graphs (Draft) Lecturer: Uri Zwick June 5, 2013 1 The maximum matching problem Let G = (V, E) be an undirected graph. A set M E is a matching
More informationA graph is finite if its vertex set and edge set are finite. We call a graph with just one vertex trivial and all other graphs nontrivial.
2301670 Graph theory 1.1 What is a graph? 1 st semester 2550 1 1.1. What is a graph? 1.1.2. Definition. A graph G is a triple (V(G), E(G), ψ G ) consisting of V(G) of vertices, a set E(G), disjoint from
More informationLecture 4: Primal Dual Matching Algorithm and NonBipartite Matching. 1 Primal/Dual Algorithm for weighted matchings in Bipartite Graphs
CMPUT 675: Topics in Algorithms and Combinatorial Optimization (Fall 009) Lecture 4: Primal Dual Matching Algorithm and NonBipartite Matching Lecturer: Mohammad R. Salavatipour Date: Sept 15 and 17, 009
More informationCPSC 536N: Randomized Algorithms Term 2. Lecture 10
CPSC 536N: Randomized Algorithms 0111 Term Prof. Nick Harvey Lecture 10 University of British Columbia In the first lecture we discussed the Max Cut problem, which is NPcomplete, and we presented a very
More informationNetwork flows and Menger s theorem
Network flows and Menger s theorem Recall... Theorem (max flow, min cut strong duality). Let G be a network. The maximum value of a flow equals the minimum capacity of a cut. We prove this strong duality
More informationDijkstra s algorithm for shortest paths when no edges have negative weight.
Lecture 14 Graph Algorithms II 14.1 Overview In this lecture we begin with one more algorithm for the shortest path problem, Dijkstra s algorithm. We then will see how the basic approach of this algorithm
More information1 Non greedy algorithms (which we should have covered
1 Non greedy algorithms (which we should have covered earlier) 1.1 Floyd Warshall algorithm This algorithm solves the allpairs shortest paths problem, which is a problem where we want to find the shortest
More informationNetwork optimization: An overview
Network optimization: An overview Mathias Johanson Alkit Communications 1 Introduction Various kinds of network optimization problems appear in many fields of work, including telecommunication systems,
More informationMinimum Spanning Trees
CS124 Lecture 5 Spring 2011 Minimum Spanning Trees A tree is an undirected graph which is connected and acyclic. It is easy to show that if graph G(V,E) that satisfies any two of the following properties
More informationCS261: A Second Course in Algorithms Lecture #3: The PushRelabel Algorithm for Maximum Flow
CS26: A Second Course in Algorithms Lecture #3: The PushRelabel Algorithm for Maximum Flow Tim Roughgarden January 2, 206 Motivation The maximum flow algorithms that we ve studied so far are augmenting
More informationModule 6 P, NP, NPComplete Problems and Approximation Algorithms
Module 6 P, NP, NPComplete Problems and Approximation Algorithms Dr. Natarajan Meghanathan Associate Professor of Computer Science Jackson State University Jackson, MS 39217 Email: natarajan.meghanathan@jsums.edu
More information1 Undirected Vertex Geography UVG
Geography Start with a chip sitting on a vertex v of a graph or digraph G. A move consists of moving the chip to a neighbouring vertex. In edge geography, moving the chip from x to y deletes the edge (x,
More informationModule 5 Graph Algorithms
Module 5 Graph lgorithms Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 97 Email: natarajan.meghanathan@jsums.edu 5. Graph Traversal lgorithms Depth First
More information3.1 Basic Definitions and Applications
Graphs hapter hapter Graphs. Basic efinitions and Applications Graph. G = (V, ) n V = nodes. n = edges between pairs of nodes. n aptures pairwise relationship between objects: Undirected graph represents
More informationGraph Theory II. PoShen Loh. June edges each. Solution: Spread the n vertices around a circle. Take parallel classes.
Graph Theory II PoShen Loh June 009 1 Warmup 1. Let n be odd. Partition the edge set of K n into n matchings with n 1 edges each. Solution: Spread the n vertices around a circle. Take parallel classes..
More informationDesign and Analysis of Algorithms
CSE 101, Winter 018 D/Q Greed SP s DP LP, Flow B&B, Backtrack Metaheuristics P, NP Design and Analysis of Algorithms Lecture 8: Greed Class URL: http://vlsicad.ucsd.edu/courses/cse101w18/ Optimization
More informationUNIT 5 GRAPH. Application of Graph Structure in real world: Graph Terminologies:
UNIT 5 CSE 103  Unit V Graph GRAPH Graph is another important nonlinear data structure. In tree Structure, there is a hierarchical relationship between, parent and children that is onetomany relationship.
More informationSpanning Trees, greedy algorithms. Lecture 22 CS2110 Fall 2017
1 Spanning Trees, greedy algorithms Lecture 22 CS2110 Fall 2017 1 We demo A8 Your space ship is on earth, and you hear a distress signal from a distance Planet X. Your job: 1. Rescue stage: Fly your ship
More informationU.C. Berkeley CS170 : Algorithms, Fall 2013 Midterm 1 Professor: Satish Rao October 10, Midterm 1 Solutions
U.C. Berkeley CS170 : Algorithms, Fall 2013 Midterm 1 Professor: Satish Rao October 10, 2013 Midterm 1 Solutions 1 True/False 1. The Mayan base 20 system produces representations of size that is asymptotically
More informationDiscrete Mathematics and Probability Theory Fall 2009 Satish Rao,David Tse Note 8
CS 70 Discrete Mathematics and Probability Theory Fall 2009 Satish Rao,David Tse Note 8 An Introduction to Graphs Formulating a simple, precise specification of a computational problem is often a prerequisite
More informationMaximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube
Maximal Monochromatic Geodesics in an Antipodal Coloring of Hypercube Kavish Gandhi April 4, 2015 Abstract A geodesic in the hypercube is the shortest possible path between two vertices. Leader and Long
More informationSteiner Trees and Forests
Massachusetts Institute of Technology Lecturer: Adriana Lopez 18.434: Seminar in Theoretical Computer Science March 7, 2006 Steiner Trees and Forests 1 Steiner Tree Problem Given an undirected graph G
More informationDesign and Analysis of Algorithms
Design and Analysis of Algorithms CSE 5311 Lecture 18 Graph Algorithm Junzhou Huang, Ph.D. Department of Computer Science and Engineering CSE5311 Design and Analysis of Algorithms 1 Graphs Graph G = (V,
More informationLecture Notes for Chapter 23: Minimum Spanning Trees
Lecture Notes for Chapter 23: Minimum Spanning Trees Chapter 23 overview Problem A town has a set of houses and a set of roads. A road connects 2 and only 2 houses. A road connecting houses u and v has
More informationThe External Network Problem
The External Network Problem Jan van den Heuvel and Matthew Johnson CDAM Research Report LSECDAM200415 December 2004 Abstract The connectivity of a communications network can often be enhanced if the
More information2. CONNECTIVITY Connectivity
2. CONNECTIVITY 70 2. Connectivity 2.1. Connectivity. Definition 2.1.1. (1) A path in a graph G = (V, E) is a sequence of vertices v 0, v 1, v 2,..., v n such that {v i 1, v i } is an edge of G for i =
More informationSpanning Trees 4/19/17. Prelim 2, assignments. Undirected trees
/9/7 Prelim, assignments Prelim is Tuesday. See the course webpage for details. Scope: up to but not including today s lecture. See the review guide for details. Deadline for submitting conflicts has passed.
More informationSpanning Trees. Lecture 22 CS2110 Spring 2017
1 Spanning Trees Lecture 22 CS2110 Spring 2017 1 Prelim 2, assignments Prelim 2 is Tuesday. See the course webpage for details. Scope: up to but not including today s lecture. See the review guide for
More informationLecture Notes on Graph Theory
Lecture Notes on Graph Theory Vadim Lozin 1 Introductory concepts A graph G = (V, E) consists of two finite sets V and E. The elements of V are called the vertices and the elements of E the edges of G.
More informationCopyright 2000, Kevin Wayne 1
Chapter 3  Graphs Undirected Graphs Undirected graph. G = (V, E) V = nodes. E = edges between pairs of nodes. Captures pairwise relationship between objects. Graph size parameters: n = V, m = E. Directed
More informationGraph Theory. Part of Texas Counties.
Graph Theory Part of Texas Counties. We would like to visit each of the above counties, crossing each county only once, starting from Harris county. Is this possible? This problem can be modeled as a graph.
More informationCOP 4531 Complexity & Analysis of Data Structures & Algorithms
COP 4531 Complexity & Analysis of Data Structures & Algorithms Lecture 9 Minimum Spanning Trees Thanks to the text authors who contributed to these slides Why Minimum Spanning Trees (MST)? Example 1 A
More informationG205 Fundamentals of Computer Engineering. CLASS 21, Mon. Nov Stefano Basagni Fall 2004 MW, 1:30pm3:10pm
G205 Fundamentals of Computer Engineering CLASS 21, Mon. Nov. 22 2004 Stefano Basagni Fall 2004 MW, 1:30pm3:10pm Greedy Algorithms, 1 Algorithms for Optimization Problems Sequence of steps Choices at
More informationLecture 8: PATHS, CYCLES AND CONNECTEDNESS
Discrete Mathematics August 20, 2014 Lecture 8: PATHS, CYCLES AND CONNECTEDNESS Instructor: Sushmita Ruj Scribe: Ishan Sahu & Arnab Biswas 1 Paths, Cycles and Connectedness 1.1 Paths and Cycles 1. Paths
More informationFast algorithms for max independent set
Fast algorithms for max independent set N. Bourgeois 1 B. Escoffier 1 V. Th. Paschos 1 J.M.M. van Rooij 2 1 LAMSADE, CNRS and Université ParisDauphine, France {bourgeois,escoffier,paschos}@lamsade.dauphine.fr
More information/ Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang
600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang 9.1 Linear Programming Suppose we are trying to approximate a minimization
More informationFinite Termination of Augmenting Path Algorithms in the Presence of Irrational Problem Data
Finite Termination of Augmenting Path Algorithms in the Presence of Irrational Problem Data Brian C. Dean Michel X. Goemans Nicole Immorlica June 28, 2006 Abstract This paper considers two similar graph
More information15451/651: Design & Analysis of Algorithms November 4, 2015 Lecture #18 last changed: November 22, 2015
15451/651: Design & Analysis of Algorithms November 4, 2015 Lecture #18 last changed: November 22, 2015 While we have good algorithms for many optimization problems, the previous lecture showed that many
More informationGraphs and trees come up everywhere. We can view the internet as a graph (in many ways) Web search views web pages as a graph
Graphs and Trees Graphs and trees come up everywhere. We can view the internet as a graph (in many ways) who is connected to whom Web search views web pages as a graph Who points to whom Niche graphs (Ecology):
More informationChapter 14. Graphs Pearson AddisonWesley. All rights reserved 14 A1
Chapter 14 Graphs 2011 Pearson AddisonWesley. All rights reserved 14 A1 Terminology G = {V, E} A graph G consists of two sets A set V of vertices, or nodes A set E of edges A subgraph Consists of a subset
More informationLecture 2. 1 Introduction. 2 The Set Cover Problem. COMPSCI 632: Approximation Algorithms August 30, 2017
COMPSCI 632: Approximation Algorithms August 30, 2017 Lecturer: Debmalya Panigrahi Lecture 2 Scribe: Nat Kell 1 Introduction In this lecture, we examine a variety of problems for which we give greedy approximation
More informationSolutions to relevant spring 2000 exam problems
Problem 2, exam Here s Prim s algorithm, modified slightly to use C syntax. MSTPrim (G, w, r): Q = V[G]; for (each u Q) { key[u] = ; key[r] = 0; π[r] = 0; while (Q not empty) { u = ExtractMin (Q); for
More informationSuperconcentrators of depth 2 and 3; odd levels help (rarely)
Superconcentrators of depth 2 and 3; odd levels help (rarely) Noga Alon Bellcore, Morristown, NJ, 07960, USA and Department of Mathematics Raymond and Beverly Sackler Faculty of Exact Sciences Tel Aviv
More informationApproximation Algorithms: The PrimalDual Method. My T. Thai
Approximation Algorithms: The PrimalDual Method My T. Thai 1 Overview of the PrimalDual Method Consider the following primal program, called P: min st n c j x j j=1 n a ij x j b i j=1 x j 0 Then the
More informationA NOTE ON THE NUMBER OF DOMINATING SETS OF A GRAPH
A NOTE ON THE NUMBER OF DOMINATING SETS OF A GRAPH STEPHAN WAGNER Abstract. In a recent article by Bród and Skupień, sharp upper and lower bounds for the number of dominating sets in a tree were determined.
More informationGraphs and Network Flows ISE 411. Lecture 7. Dr. Ted Ralphs
Graphs and Network Flows ISE 411 Lecture 7 Dr. Ted Ralphs ISE 411 Lecture 7 1 References for Today s Lecture Required reading Chapter 20 References AMO Chapter 13 CLRS Chapter 23 ISE 411 Lecture 7 2 Minimum
More informationP Is Not Equal to NP. ScholarlyCommons. University of Pennsylvania. Jon Freeman University of Pennsylvania. October 1989
University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1989 P Is Not Equal to NP Jon Freeman University of Pennsylvania Follow this and
More informationarxiv: v3 [cs.dm] 12 Jun 2014
On Maximum Differential Coloring of Planar Graphs M. A. Bekos 1, M. Kaufmann 1, S. Kobourov, S. Veeramoni 1 WilhelmSchickardInstitut für Informatik  Universität Tübingen, Germany Department of Computer
More informationCSC 373: Algorithm Design and Analysis Lecture 4
CSC 373: Algorithm Design and Analysis Lecture 4 Allan Borodin January 14, 2013 1 / 16 Lecture 4: Outline (for this lecture and next lecture) Some concluding comments on optimality of EST Greedy Interval
More informationSources for this lecture. 3. Matching in bipartite and general graphs. Symmetric difference
S72.2420 / T79.5203 Matching in bipartite and general graphs 1 3. Matching in bipartite and general graphs Let G be a graph. A matching M in G is a set of nonloop edges with no shared endpoints. Let
More informationGreedy Algorithms and Matroids. Andreas Klappenecker
Greedy Algorithms and Matroids Andreas Klappenecker Greedy Algorithms A greedy algorithm solves an optimization problem by working in several phases. In each phase, a decision is made that is locally optimal
More informationExtremal results for Bergehypergraphs
Extremal results for Bergehypergraphs Dániel Gerbner Cory Palmer Abstract Let G be a graph and H be a hypergraph both on the same vertex set. We say that a hypergraph H is a BergeG if there is a bijection
More informationList of Theorems. Mat 416, Introduction to Graph Theory. Theorem 1 The numbers R(p, q) exist and for p, q 2,
List of Theorems Mat 416, Introduction to Graph Theory 1. Ramsey s Theorem for graphs 8.3.11. Theorem 1 The numbers R(p, q) exist and for p, q 2, R(p, q) R(p 1, q) + R(p, q 1). If both summands on the
More information1. a graph G = (V (G), E(G)) consists of a set V (G) of vertices, and a set E(G) of edges (edges are pairs of elements of V (G))
10 Graphs 10.1 Graphs and Graph Models 1. a graph G = (V (G), E(G)) consists of a set V (G) of vertices, and a set E(G) of edges (edges are pairs of elements of V (G)) 2. an edge is present, say e = {u,
More informationTwo Characterizations of Hypercubes
Two Characterizations of Hypercubes Juhani Nieminen, Matti Peltola and Pasi Ruotsalainen Department of Mathematics, University of Oulu University of Oulu, Faculty of Technology, Mathematics Division, P.O.
More informationVertex Cover Approximations
CS124 Lecture 20 Heuristics can be useful in practice, but sometimes we would like to have guarantees. Approximation algorithms give guarantees. It is worth keeping in mind that sometimes approximation
More informationThe Geometry of Carpentry and Joinery
The Geometry of Carpentry and Joinery Pat Morin and Jason Morrison School of Computer Science, Carleton University, 115 Colonel By Drive Ottawa, Ontario, CANADA K1S 5B6 Abstract In this paper we propose
More information11.1. Definitions. 11. Domination in Graphs
11. Domination in Graphs Some definitions Minimal dominating sets Bounds for the domination number. The independent domination number Other domination parameters. 11.1. Definitions A vertex v in a graph
More informationMinimum Spanning Tree
Minimum Spanning Tree 1 Minimum Spanning Tree G=(V,E) is an undirected graph, where V is a set of nodes and E is a set of possible interconnections between pairs of nodes. For each edge (u,v) in E, we
More informationSkew propagation time
Graduate Theses and Dissertations Graduate College 015 Skew propagation time Nicole F. Kingsley Iowa State University Follow this and additional works at: http://lib.dr.iastate.edu/etd Part of the Applied
More informationPaths, Trees, and Flowers by Jack Edmonds
Paths, Trees, and Flowers by Jack Edmonds Xiang Gao ETH Zurich Distributed Computing Group www.disco.ethz.ch Introduction and background Edmonds maximum matching algorithm Matchingduality theorem Matching
More informationAssignment and Matching
Assignment and Matching By Geetika Rana IE 680 Dept of Industrial Engineering 1 Contents Introduction Bipartite Cardinality Matching Problem Bipartite Weighted Matching Problem Stable Marriage Problem
More information15.4 Longest common subsequence
15.4 Longest common subsequence Biological applications often need to compare the DNA of two (or more) different organisms A strand of DNA consists of a string of molecules called bases, where the possible
More informationPartitioning Orthogonal Polygons by Extension of All Edges Incident to Reflex Vertices: lower and upper bounds on the number of pieces
Partitioning Orthogonal Polygons by Extension of All Edges Incident to Reflex Vertices: lower and upper bounds on the number of pieces António Leslie Bajuelos 1, Ana Paula Tomás and Fábio Marques 3 1 Dept.
More informationProblem set 2. Problem 1. Problem 2. Problem 3. CS261, Winter Instructor: Ashish Goel.
CS261, Winter 2017. Instructor: Ashish Goel. Problem set 2 Electronic submission to Gradescope due 11:59pm Thursday 2/16. Form a group of 23 students that is, submit one homework with all of your names.
More informationMaximal Independent Set
Chapter 0 Maximal Independent Set In this chapter we present a highlight of this course, a fast maximal independent set (MIS) algorithm. The algorithm is the first randomized algorithm that we study in
More informationCME 305: Discrete Mathematics and Algorithms Instructor: Reza Zadeh HW#3 Due at the beginning of class Thursday 03/02/17
CME 305: Discrete Mathematics and Algorithms Instructor: Reza Zadeh (rezab@stanford.edu) HW#3 Due at the beginning of class Thursday 03/02/17 1. Consider a model of a nonbipartite undirected graph in which
More information