Theorem 2.9: nearest addition algorithm
- Cuthbert Parrish
- 5 years ago
There are severe limits on our ability to compute near-optimal tours. It is NP-complete to decide whether a given undirected graph $G=(V,E)$ has a Hamiltonian cycle. An approximation algorithm for the TSP can be used to solve the Hamiltonian cycle problem: given a graph $G=(V,E)$ on $n$ nodes, form an input to the TSP by setting, for each pair $i,j$, the cost $c_{ij}=1$ if $(i,j)\in E$, and $c_{ij}=n+2$ otherwise. If there is a Hamiltonian cycle in $G$, then there is a tour of cost $n$, and otherwise each tour costs at least $2n+1$ (it must use at least one non-edge of cost $n+2$ in addition to at least $n-1$ edges of cost at least 1). If there were to exist a 2-approximation algorithm for the TSP, then we could use it to distinguish graphs with Hamiltonian cycles from those without any: run the approximation algorithm on the new TSP input, and if the tour computed has cost at most $2n$, then there exists a Hamiltonian cycle in $G$, and otherwise there does not. Setting the cost for the non-edges to be $\alpha n+2$ has a similarly inflated consequence, and we obtain an input to the TSP of polynomial size provided that, for example, $\alpha = O(2^n)$. MAT ApprAl, Spring Mar
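The reduction above can be sketched in a few lines. This is an illustrative implementation (function names are my own, not from the text); the brute-force tour search is only for checking the claim on tiny graphs.

```python
# Reduction from Hamiltonian cycle to TSP: edges get cost 1, non-edges n + 2.
def hc_to_tsp_costs(n, edges):
    """Return the n x n TSP cost matrix of the reduction."""
    edge_set = {frozenset(e) for e in edges}
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                c[i][j] = 1 if frozenset((i, j)) in edge_set else n + 2
    return c

def cheapest_tour_cost(c):
    """Brute-force optimal tour cost (exponential; tiny examples only)."""
    from itertools import permutations
    n = len(c)
    best = float("inf")
    for perm in permutations(range(1, n)):
        tour = (0,) + perm + (0,)
        best = min(best, sum(c[tour[t]][tour[t + 1]] for t in range(n)))
    return best

# A 4-cycle has a Hamiltonian cycle, so the optimal tour costs n = 4;
# the star K_{1,3} has none, so every tour costs at least 2n + 1 = 9.
cycle = hc_to_tsp_costs(4, [(0, 1), (1, 2), (2, 3), (3, 0)])
star = hc_to_tsp_costs(4, [(0, 1), (0, 2), (0, 3)])
```

Any tour of cost at most $2n$ certifies a Hamiltonian cycle, which is exactly what a 2-approximation would be forced to find.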
Theorem 2.9: For any $\alpha > 1$, there does not exist an $\alpha$-approximation algorithm for the traveling salesman problem on $n$ cities, provided P ≠ NP. In fact, the existence of an $O(2^n)$-approximation algorithm for the TSP would similarly imply that P = NP.

This, however, is not the end of the story. A natural assumption to make about the input to the TSP is to restrict attention to those inputs that are metric; that is, for each triple $i,j,k$ we have that the triangle inequality $c_{ik} \le c_{ij} + c_{jk}$ holds. This rules out the construction used in the reduction from the Hamiltonian cycle problem above; the non-edges can be given cost at most 2 for the triangle inequality to hold, and this is too small to yield a nontrivial nonapproximability result.

A natural greedy heuristic to consider for the TSP is often referred to as the nearest addition algorithm. Find the two closest cities, say $i$ and $j$, and start by building a tour on that pair of cities consisting of going from $i$ to $j$ and then back to $i$ again; this is the first iteration.
In each subsequent iteration, we extend the tour on the current subset $S$ by including one additional city, until we include all cities. In each iteration, we find a pair of cities $i \in S$ and $j \notin S$ for which the cost $c_{ij}$ is minimum; let $k$ be the city that follows $i$ in the current tour on $S$. We add $j$ to $S$, and insert $j$ into the current tour between $i$ and $k$.

The crux of the analysis is the relationship of this algorithm to Prim's algorithm for the minimum spanning tree (MST) in an undirected graph. A spanning tree of a connected graph $G=(V,E)$ is a minimal subset of edges $F \subseteq E$ such that each pair of nodes in $V$ is connected by a path using edges only in $F$. A MST is a spanning tree for which the total edge cost is minimized. Prim's algorithm computes a MST by iteratively constructing a set $S$ along with a tree $T$, starting with $S=\{v\}$ for some (arbitrarily chosen) node $v \in V$ and $T=(S,F)$ with $F=\emptyset$.
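The nearest addition rule can be sketched directly from the description above. This is a minimal illustration on a symmetric cost matrix; the names and the small test instance are my own.

```python
# Nearest addition: grow the tour by inserting the city closest to it.
def nearest_addition(c):
    """Return a tour (list of city indices) built by nearest addition."""
    n = len(c)
    # First iteration: the two closest cities.
    i0, j0 = min(((i, j) for i in range(n) for j in range(n) if i != j),
                 key=lambda p: c[p[0]][p[1]])
    tour = [i0, j0]
    in_tour = {i0, j0}
    while len(tour) < n:
        # Find i in S, j not in S with c[i][j] minimum.
        i, j = min(((i, j) for i in tour for j in range(n)
                    if j not in in_tour),
                   key=lambda p: c[p[0]][p[1]])
        # Insert j between i and the city k that follows i in the tour.
        pos = tour.index(i)
        tour.insert(pos + 1, j)
        in_tour.add(j)
    return tour

def tour_cost(c, tour):
    n = len(tour)
    return sum(c[tour[t]][tour[(t + 1) % n]] for t in range(n))

# Four points on a line, with distances as costs: a metric instance.
pts = [0, 1, 3, 6]
c = [[abs(a - b) for b in pts] for a in pts]
tour = nearest_addition(c)
```

On this instance the algorithm happens to recover the optimal tour of cost 12; Theorem 2.11 below guarantees cost at most twice optimal in general.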
In each iteration, it determines the edge $(i,j)$ such that $i \in S$, $j \notin S$, and $(i,j)$ is of minimum cost, and adds the edge $(i,j)$ to $F$. Clearly, this is the same sequence of vertex pairs identified by the nearest addition algorithm. Furthermore, there is another important relationship between the MST problem and the TSP.

Lemma 2.10: For any input to the traveling salesman problem, the cost of the optimal tour is at least the cost of the minimum spanning tree on the same input. (Deleting any one edge of the optimal tour yields a spanning tree, so the minimum spanning tree can only be cheaper.)
Theorem 2.11: The nearest addition algorithm for the metric TSP is a 2-approximation algorithm.

Proof: Let $S_2, S_3, \ldots, S_n$ be the subsets identified at the end of each iteration of the nearest addition algorithm (where $S_n = V$), and let $F = \{(i_2,j_2), (i_3,j_3), \ldots, (i_n,j_n)\}$, where $(i_\ell, j_\ell)$ is the edge identified in iteration $\ell - 1$ (with $j_\ell \notin S_{\ell-1}$, $\ell = 3,\ldots,n$). We also know that $(\{1,\ldots,n\}, F)$ is a MST for the original input, when viewed as a complete undirected graph with edge costs $c_{ij}$. Thus, if OPT is the optimal value for the TSP input, then OPT $\ge \sum_{(i,j)\in F} c_{ij}$.

The cost of the tour on the first two nodes $i_2$ and $j_2$ is exactly $2c_{i_2 j_2}$. Consider an iteration in which a city $j$ is inserted between cities $i$ and $k$. How much does the length of the tour increase? An easy calculation gives $c_{ij} + c_{jk} - c_{ik}$. By the triangle inequality, we have that $c_{jk} \le c_{ji} + c_{ik}$ or, equivalently, $c_{jk} - c_{ik} \le c_{ij}$. Hence, the increase in cost in this iteration is at most $c_{ij} + c_{ij} = 2c_{ij}$. Thus, overall, we know that the final tour has cost at most $2\sum_{(i,j)\in F} c_{ij} \le 2\,\mathrm{OPT}$, and the theorem is proved.
A graph is said to be Eulerian if there exists a permutation of its edges of the form $(i_0,i_1), (i_1,i_2), \ldots, (i_{k-1},i_k), (i_k,i_0)$. We will call this permutation a traversal of the edges, since it allows us to visit every edge exactly once. A graph is Eulerian if and only if it is connected and each node has even degree (the number of edges with that node as one of its endpoints). Furthermore, if a graph is Eulerian, one can easily construct the required traversal of the edges, given the graph.

To find a good tour for a TSP input, suppose that we first compute a MST (e.g., by Prim's algorithm), and suppose that we then replace each edge by two copies of itself. The resulting (multi)graph has cost at most 2 OPT and is Eulerian. We can construct a tour of the cities from the Eulerian traversal of the edges, $(i_0,i_1), (i_1,i_2), \ldots, (i_k,i_0)$: consider the sequence of nodes $i_0, i_1, \ldots, i_k$, and remove all but the first occurrence of each city in this sequence.
This yields a tour containing each city exactly once (assuming we then return to $i_0$ at the end). To bound the length of this tour, consider two consecutive cities in this tour, $i_\ell$ and $i_m$. The cities $i_{\ell+1}, \ldots, i_{m-1}$ have already been visited earlier in the tour. By the triangle inequality, $c_{i_\ell i_m}$ can be upper bounded by the total cost of the edges traversed in the Eulerian traversal between $i_\ell$ and $i_m$, i.e., the total cost of the edges $(i_\ell, i_{\ell+1}), \ldots, (i_{m-1}, i_m)$. In total, the cost of the tour is at most the total cost of all of the edges in the Eulerian graph, which is at most 2 OPT.
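The double-tree algorithm can be sketched as below. A convenient shortcut: for a doubled tree, the shortcut Euler tour is exactly a depth-first preorder of the MST, so we need not build the multigraph explicitly. Names and the test instance are illustrative.

```python
# Double-tree algorithm: MST (Prim) -> doubled edges -> shortcut Euler tour.
def prim_mst(c):
    """Return MST adjacency lists of the complete graph with costs c."""
    n = len(c)
    in_tree = [False] * n
    in_tree[0] = True
    adj = [[] for _ in range(n)]
    for _ in range(n - 1):
        i, j = min(((i, j) for i in range(n) if in_tree[i]
                    for j in range(n) if not in_tree[j]),
                   key=lambda p: c[p[0]][p[1]])
        adj[i].append(j)
        adj[j].append(i)
        in_tree[j] = True
    return adj

def double_tree_tour(c):
    adj = prim_mst(c)
    tour, seen, stack = [], set(), [0]
    while stack:                      # DFS preorder = shortcut Euler tour
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            tour.append(v)
            stack.extend(reversed(adj[v]))
    return tour

def tour_cost(c, tour):
    n = len(tour)
    return sum(c[tour[t]][tour[(t + 1) % n]] for t in range(n))

# Unit square under Manhattan distance: MST cost 3, so tour cost <= 6.
pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
c = [[abs(a[0] - b[0]) + abs(a[1] - b[1]) for b in pts] for a in pts]
tour = double_tree_tour(c)
```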
Theorem 2.12: The double-tree algorithm for the metric traveling salesman problem is a 2-approximation algorithm.

The message of the analysis of the double-tree algorithm is also quite useful: if we can efficiently construct an Eulerian subgraph of the complete input graph for which the total edge cost is at most $\alpha$ times the optimal value of the TSP input, then we have derived an $\alpha$-approximation algorithm as well. This strategy can be carried out to yield a 3/2-approximation algorithm.

Consider the output from the MST computation. This graph is not Eulerian, since any tree must have nodes of degree one, but it is possible that not many nodes have odd degree. Let $O$ be the set of odd-degree nodes in the MST. The sum of node degrees must be even, since each edge in the graph contributes 2 to this total. The total degree of the even-degree nodes must also be even, and hence the total degree of the odd-degree nodes must also be even. That is, we must have an even number of odd-degree nodes; $|O| = 2k$ for some positive integer $k$.
Suppose that we pair up the nodes in $O$: $(i_1,i_2), (i_3,i_4), \ldots, (i_{2k-1},i_{2k})$. Such a collection of edges that contains each node in $O$ exactly once is called a perfect matching of $O$. A classic result of combinatorial optimization states that, given a complete graph (on an even number of nodes) with edge costs, it is possible to compute the perfect matching of minimum total cost in polynomial time. Given the MST, we identify the set $O$ of odd-degree nodes with even cardinality, and then compute a minimum-cost perfect matching on $O$.
Add this set of matching edges to our MST to construct an Eulerian graph on our original set of cities: it is connected, since the MST is connected, and each node has even degree, since we added a new edge incident to each node of odd degree in the MST. As in the double-tree algorithm, we can shortcut this graph to produce a tour of no greater cost. This is known as Christofides' algorithm.

Theorem 2.13: Christofides' algorithm for the metric traveling salesman problem is a 3/2-approximation algorithm.

Proof: We show that the edges in the Eulerian graph produced by the algorithm have total cost at most $\frac{3}{2}$OPT. We know that the MST edges have total cost at most OPT, so we need only show that the perfect matching on $O$ has cost at most OPT/2. First observe that there is a tour on just the nodes in $O$ of total cost at most OPT. This again uses the shortcutting argument: start with the optimal tour on the entire set of cities, and if for two cities $i$ and $j$ in $O$, the optimal tour between $i$ and $j$ contains only cities that are not in $O$, then include edge $(i,j)$ in the tour on $O$.
Each edge in the tour on $O$ corresponds to a path in the original tour, and these paths are disjoint; hence, by the triangle inequality, the total length of the tour on $O$ is no more than the length of the original tour. Now consider this shortcut tour on the node set $O$, and color its edges red and blue, alternating colors (possible since $|O|$ is even). This partitions the edges into two sets, each of which is a perfect matching on the node set $O$. In total, these two edge sets have cost at most OPT. Thus, the cheaper of these two sets has cost at most OPT/2. Hence, there is a perfect matching on $O$ of cost at most OPT/2, and therefore the algorithm must find a matching of cost at most OPT/2.

No better approximation algorithm for the metric TSP is known. Substantially better algorithms might yet be found, since the strongest negative result is the following.

Theorem 2.14: Unless P = NP, for any constant $\alpha < \frac{220}{219}$, no $\alpha$-approximation algorithm for the metric TSP exists.

It is possible to obtain a PTAS in the case that cities correspond to points in the Euclidean plane and the cost of traveling between two cities is equal to the Euclidean distance between the two points.
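The whole pipeline can be sketched as follows. Note one deliberate simplification: the minimum-cost perfect matching on $O$ is found here by brute-force recursion, which is exponential in $|O|$; a real implementation would use a polynomial-time matching algorithm (e.g., Edmonds' blossom algorithm). All names are illustrative.

```python
# Christofides sketch: MST + min-cost matching on odd nodes -> Euler -> shortcut.
def prim_mst_edges(c):
    n = len(c)
    in_tree = [False] * n
    in_tree[0] = True
    edges = []
    for _ in range(n - 1):
        i, j = min(((i, j) for i in range(n) if in_tree[i]
                    for j in range(n) if not in_tree[j]),
                   key=lambda p: c[p[0]][p[1]])
        edges.append((i, j))
        in_tree[j] = True
    return edges

def min_cost_perfect_matching(c, nodes):
    """Brute force: match nodes[0] with each candidate, recurse on the rest."""
    nodes = list(nodes)
    if not nodes:
        return []
    u = nodes[0]
    best, best_cost = [], float("inf")
    for idx in range(1, len(nodes)):
        v = nodes[idx]
        sub = min_cost_perfect_matching(c, nodes[1:idx] + nodes[idx + 1:])
        cost = c[u][v] + sum(c[a][b] for a, b in sub)
        if cost < best_cost:
            best, best_cost = [(u, v)] + sub, cost
    return best

def christofides(c):
    n = len(c)
    edges = prim_mst_edges(c)
    deg = [0] * n
    for i, j in edges:
        deg[i] += 1
        deg[j] += 1
    odd = [v for v in range(n) if deg[v] % 2 == 1]
    edges += min_cost_perfect_matching(c, odd)
    adj = {v: [] for v in range(n)}      # Euler traversal (Hierholzer)
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    stack, euler = [0], []
    while stack:
        v = stack[-1]
        if adj[v]:
            w = adj[v].pop()
            adj[w].remove(v)
            stack.append(w)
        else:
            euler.append(stack.pop())
    seen, tour = set(), []               # shortcut repeated cities
    for v in euler:
        if v not in seen:
            seen.add(v)
            tour.append(v)
    return tour

pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
c = [[abs(a[0] - b[0]) + abs(a[1] - b[1]) for b in pts] for a in pts]
tour = christofides(c)
```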
3. Rounding data and dynamic programming

The knapsack problem
Scheduling jobs on identical parallel machines
The bin-packing problem

Dynamic programming is a standard technique in algorithm design: an optimal solution for a problem is built up from optimal solutions for a number of subproblems, normally stored in a table or multidimensional array. Approximation algorithms can be designed using dynamic programming in a variety of ways, many of which involve rounding the input data in some way.
3.1 The knapsack problem

We are given a set of items $I = \{1,\ldots,n\}$, where each item $i$ has a value $v_i$ and a size $s_i$. All sizes and values are positive integers. The knapsack has a positive integer capacity $B$. The goal is to find a subset of items $S \subseteq I$ that maximizes the value $\sum_{i\in S} v_i$ of items in the knapsack subject to the constraint that the total size of these items is no more than the capacity; that is, $\sum_{i\in S} s_i \le B$.

We consider only items that could actually fit in the knapsack, so that $s_i \le B$ for each $i \in I$. We can use dynamic programming to find the optimal solution to the knapsack problem. We maintain an array entry $A(j)$ for $j = 1,\ldots,n$. Each entry $A(j)$ is a list of pairs $(t,w)$. A pair $(t,w)$ in the list of entry $A(j)$ indicates that there is a set $S$ from the first $j$ items that uses space exactly $t$ and has value exactly $w$; i.e., there exists a set $S \subseteq \{1,\ldots,j\}$ such that $\sum_{i\in S} s_i = t$ and $\sum_{i\in S} v_i = w$.
Each list keeps track of only the most efficient pairs. To do this, we need the notion of one pair dominating another: $(t,w)$ dominates another pair $(t',w')$ if $t \le t'$ and $w \ge w'$; that is, the solution indicated by the pair $(t,w)$ uses no more space than $(t',w')$, but has at least as much value. Domination is a transitive property: if $(t,w)$ dominates $(t',w')$, which dominates $(t'',w'')$, then $(t,w)$ also dominates $(t'',w'')$. We will ensure that in any list, no pair dominates another one.

This means that we can assume each list $A(j)$ is of the form $(t_1,w_1),\ldots,(t_k,w_k)$ with $t_1 < t_2 < \cdots < t_k$ and $w_1 < w_2 < \cdots < w_k$. Since the sizes of the items are integers, there are at most $B+1$ pairs in each list. Furthermore, if we let $V = \sum_{i=1}^n v_i$ be the maximum possible value for the knapsack, then there can be at most $V+1$ pairs in the list. Finally, we ensure that for each feasible set $S \subseteq \{1,\ldots,j\}$ (with $\sum_{i\in S} s_i \le B$), the list $A(j)$ contains some pair $(t,w)$ that dominates $(\sum_{i\in S} s_i, \sum_{i\in S} v_i)$.
Algorithm 3.1 is the dynamic program that constructs the lists $A(j)$ and solves the knapsack problem. We start out with $A(1) = \{(0,0), (s_1,v_1)\}$. For each $j = 2,\ldots,n$, we do the following: we first set $A(j) \leftarrow A(j-1)$, and for each $(t,w) \in A(j-1)$, we also add the pair $(t+s_j, w+v_j)$ to the list $A(j)$ if $t+s_j \le B$. We finally remove from $A(j)$ all dominated pairs by sorting the list with respect to the space component, retaining the best value for each space total possible, and removing any larger space total that does not have a corresponding larger value.

One way to view this process is to generate two lists, $A(j-1)$ and the one augmented by $(s_j,v_j)$, and then perform a type of merging of these two lists. We return the pair $(t,w)$ from $A(n)$ of maximum value $w$ as our solution.
ALGORITHM 3.1: A dynamic programming algorithm for the knapsack problem
1. A(1) ← {(0,0), (s_1,v_1)}
2. for j ← 2 to n do
3.   A(j) ← A(j−1)
4.   for each (t,w) ∈ A(j−1) do
5.     if t + s_j ≤ B then
6.       Add (t + s_j, w + v_j) to A(j)
7.   Remove dominated pairs from A(j)
8. return max_{(t,w) ∈ A(n)} w

Theorem 3.1: Algorithm 3.1 correctly computes the optimal value of the knapsack problem.

Proof: By induction on $j$ we prove that $A(j)$ contains all non-dominated pairs corresponding to feasible sets $S \subseteq \{1,\ldots,j\}$. Certainly this is true in the base case by setting $A(1)$ to $\{(0,0), (s_1,v_1)\}$. Suppose it is true for $A(j-1)$. Let $S \subseteq \{1,\ldots,j\}$, and let $t = \sum_{i\in S} s_i$ and $w = \sum_{i\in S} v_i$. We claim that there is some pair $(t',w') \in A(j)$ such that $t' \le t$ and $w' \ge w$.
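Algorithm 3.1 can be sketched compactly as below; the dominated-pair removal in step 7 becomes a sort-and-sweep over the merged candidate list. The function name is my own.

```python
# Algorithm 3.1 sketch: lists of non-dominated (space, value) pairs.
def knapsack_opt(sizes, values, B):
    """Return the optimal knapsack value via the dominated-pairs DP."""
    A = [(0, 0)] + ([(sizes[0], values[0])] if sizes[0] <= B else [])
    for s, v in zip(sizes[1:], values[1:]):
        # Merge A(j-1) with its copy augmented by item j.
        candidates = A + [(t + s, w + v) for (t, w) in A if t + s <= B]
        # Remove dominated pairs: sort by (space, -value), keep rising value.
        candidates.sort(key=lambda p: (p[0], -p[1]))
        A, best_w = [], -1
        for t, w in candidates:
            if w > best_w:
                A.append((t, w))
                best_w = w
    return max(w for _, w in A)
```

With sizes (1, 2, 3), values (6, 10, 12), and capacity 5, the best set is items 2 and 3, for total value 22.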
First, suppose that $j \notin S$. Then the claim follows by the induction hypothesis and by the fact that we initially set $A(j)$ to $A(j-1)$ and removed only dominated pairs. Now suppose $j \in S$. Then for $S' = S - \{j\}$, by the induction hypothesis, there is some $(t',w') \in A(j-1)$ that dominates $(\sum_{i\in S'} s_i, \sum_{i\in S'} v_i)$, so that $t' \le \sum_{i\in S'} s_i$ and $w' \ge \sum_{i\in S'} v_i$. Then the algorithm will add the pair $(t'+s_j, w'+v_j)$ to $A(j)$, where $t'+s_j \le t$ and $w'+v_j \ge w$. Thus, there will be some pair $(t'',w'') \in A(j)$ that dominates $(t,w)$.

Algorithm 3.1 takes $O(n \min(B,V))$ time. This is not a polynomial-time algorithm, since we assume that all input numbers are encoded in binary; thus, the size of the input number $B$ is essentially $\log_2 B$, and so the running time $O(nB)$ is exponential in the size of the input number $B$, not polynomial. If we were to assume that the input is given in unary, then $O(nB)$ would be a polynomial in the size of the input. It is sometimes useful to make this distinction between problems.
Definition 3.2: An algorithm for a problem is said to be pseudopolynomial if its running time is polynomial in the size of the input when the numeric part of the input is encoded in unary.

If the maximum possible value $V$ were some polynomial in $n$, then the running time would indeed be a polynomial in the input size. We now show how to get a PTAS for the knapsack problem by rounding the values of the items so that $V$ is indeed a polynomial in $n$. The rounding induces some loss of precision in the value of a solution, but this does not affect the final value by too much.

Recall the definition of an approximation scheme.

Definition 3.3: A polynomial-time approximation scheme (PTAS) is a family of algorithms $\{A_\epsilon\}$, where there is an algorithm for each $\epsilon > 0$, such that $A_\epsilon$ is a $(1+\epsilon)$-approximation algorithm (for minimization problems) or a $(1-\epsilon)$-approximation algorithm (for maximization problems).

The running time of the algorithm $A_\epsilon$ is allowed to depend arbitrarily on $1/\epsilon$: this dependence could be exponential in $1/\epsilon$, or worse. We often focus attention on algorithms for which we can give a good bound on the dependence of the running time of $A_\epsilon$ on $1/\epsilon$.
Definition 3.4: A fully polynomial-time approximation scheme (FPAS, FPTAS) is an approximation scheme such that the running time of $A_\epsilon$ is bounded by a polynomial in $1/\epsilon$.

We can now give a FPTAS for the knapsack problem. Let us measure value in (integer) multiples of $\mu$ (we set $\mu$ below), and convert each value by rounding down to the nearest integer multiple of $\mu$. More precisely, we set $v_i' = \lfloor v_i/\mu \rfloor$ for each item $i$. We then run Algorithm 3.1 on the items with sizes $s_i$ and values $v_i'$, and output the optimal solution for the rounded data as a near-optimal solution for the true data.

The main idea is to show that the accuracy we lose in rounding is not so great, and yet the rounding enables us to have the algorithm run in polynomial time. Let us first do a rough estimate: if we used values $\mu v_i' = \mu\lfloor v_i/\mu\rfloor$ instead of $v_i$, then each value is inaccurate by at most $\mu$, and so each feasible solution has its value changed by at most $n\mu$. We want the error introduced to be at most $\epsilon$ times a lower bound on the optimal value (and so be sure that the true relative error is at most $\epsilon$). Let $M$ be the maximum value of an item; that is, $M = \max_{i\in I} v_i$.
$M$ is a lower bound on OPT, since one can pack the most valuable item in the knapsack by itself. Thus, it makes sense to set $\mu$ so that $n\mu = \epsilon M$ or, in other words, to set $\mu = \epsilon M/n$. Note that with the modified values, $V' = \sum_{i=1}^n v_i' \le n\lfloor M/\mu \rfloor = n\lfloor n/\epsilon \rfloor$. Thus, the running time of the algorithm is $O(n \min(B, V')) = O(n^3/\epsilon)$ and is bounded by a polynomial in $1/\epsilon$. The algorithm returns a solution whose value is at least $(1-\epsilon)$ times that of an optimal solution.

ALGORITHM 3.2: An approximation scheme for the knapsack problem
1. M ← max_{i∈I} v_i
2. μ ← εM/n
3. v_i' ← ⌊v_i/μ⌋ for all i ∈ I
4. Run Algorithm 3.1 on the knapsack instance with sizes s_i and values v_i'
Theorem 3.5: Algorithm 3.2 is a fully polynomial-time approximation scheme for the knapsack problem.

Proof: We need to show that the algorithm returns a solution whose value is at least $(1-\epsilon)$ times the value of an optimal solution. Let $S$ be the set of items returned by the algorithm, and let $O$ be an optimal set of items. Certainly $M \le$ OPT (put the most valuable item in a knapsack by itself). Furthermore, by the definition of $v_i'$, $\mu v_i' \le v_i \le \mu(v_i'+1)$, so that $\mu v_i' \ge v_i - \mu$. Applying the definitions of the rounded data, along with the fact that $S$ is an optimal solution for the values $v_i'$, we can derive the following inequalities:

$\sum_{i\in S} v_i \;\ge\; \mu\sum_{i\in S} v_i' \;\ge\; \mu\sum_{i\in O} v_i' \;\ge\; \sum_{i\in O} v_i - |O|\mu \;\ge\; \mathrm{OPT} - n\mu \;=\; \mathrm{OPT} - \epsilon M \;\ge\; (1-\epsilon)\,\mathrm{OPT}.$
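Algorithm 3.2 can be sketched end to end. For compactness the inner exact solver here is a dict-based variant of the Algorithm 3.1 list DP that also tracks which items achieve each pair; function names are my own.

```python
# FPTAS sketch: round values to multiples of mu = eps*M/n, solve exactly.
import math

def knapsack_opt_set(sizes, values, B):
    """Exact DP returning (best value, chosen item set) for integer values."""
    A = {0: (0, frozenset())}        # space -> (best value, witness items)
    for j, (s, v) in enumerate(zip(sizes, values)):
        new = dict(A)
        for t, (w, items) in A.items():
            if t + s <= B and new.get(t + s, (-1, None))[0] < w + v:
                new[t + s] = (w + v, items | {j})
        A = new
    return max(A.values(), key=lambda p: p[0])

def knapsack_fptas(sizes, values, B, eps):
    """Return a solution value of at least (1 - eps) * OPT."""
    n = len(values)
    M = max(values)                  # lower bound on OPT
    mu = eps * M / n
    rounded = [math.floor(v / mu) for v in values]
    _, items = knapsack_opt_set(sizes, rounded, B)
    # Report the TRUE value of the set that is optimal for rounded values.
    return sum(values[i] for i in items)
```

On sizes (10, 20, 30), values (60, 100, 120), $B = 50$, $\epsilon = 0.1$: here $\mu = 4$, the rounded values are (15, 25, 30), and the rounded-optimal set {2, 3} also happens to be the true optimum, of value 220.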
3.2 Scheduling jobs on identical parallel machines

Earlier we saw that by first sorting the jobs in order of non-increasing processing requirement, and then using a list scheduling rule, we find a schedule of length guaranteed to be at most 4/3 times the optimum. This result contains the seeds of a PTAS: for any given value of $\rho > 1$, we give an algorithm that runs in polynomial time and finds a solution of objective function value at most $\rho$ times the optimal value.

As earlier, let the processing requirement of job $j$ be $p_j$, $j = 1,\ldots,n$, and let $C_{\max}$ denote the length (makespan) of a given schedule with job completion times $C_j$, $j = 1,\ldots,n$; the optimal value is denoted $C^*_{\max}$. Each processing requirement is a positive integer. In the analysis of the list scheduling rule, its error can be upper bounded by the processing requirement of the last job to complete.
The 4/3-approximation was based on this fact, combined with the observation that when each job's processing requirement is more than $C^*_{\max}/3$, this natural greedy-type algorithm actually finds the optimal solution. We present an approximation scheme for this problem based on a similar principle: focus on a specified subset of the longest jobs, and compute the optimal schedule for that subset; then extend that partial schedule by using list scheduling on the remaining jobs. There is a trade-off between the number of long jobs and the quality of the solution found.

Let $k$ be a fixed positive integer; we will derive a family of algorithms, and focus on $A_k$ among them. We partition the job set into two parts: the long jobs and the short jobs, where a job $j$ is considered short if $p_j \le \frac{1}{mk}\sum_{i=1}^n p_i$. This implies that there are at most $mk$ long jobs. Enumerate all schedules for the long jobs, and choose one with the minimum makespan. Extend this schedule by using list scheduling for the short jobs; i.e., given an arbitrary order of the short jobs, schedule these jobs in order, always assigning the next job to the machine currently least loaded.
Consider the running time of algorithm $A_k$. To specify a schedule for the long jobs, we simply indicate to which of the $m$ machines each long job is assigned; thus, there are at most $m^{mk}$ distinct assignments. If we focus on the special case of this problem in which the number of machines $m$ is a constant (say, 100, 1,000, or even 1,000,000), then this number is also a constant, not depending on the size of the input. Thus, we can check each schedule, and determine the optimal length schedule in polynomial time in this special case.

As in the analysis of the local search algorithm earlier, we focus on the last job $\ell$ to finish. Recall that we derived the inequality $C_{\max} \le p_\ell + \frac{1}{m}\sum_{j \ne \ell} p_j$. The validity of this inequality relied only on the fact that each machine is busy up until the time that job $\ell$ starts. To analyze the algorithm that starts by finding the optimal schedule for the long jobs, we distinguish now between two cases.
If the last job to finish (in the entire schedule), job $\ell$, is a short job, then this job was scheduled by the list scheduling rule, and it follows that the previous inequality holds. Since job $\ell$ is short, $p_\ell \le \frac{1}{mk}\sum_j p_j$, and hence (using $\frac{1}{m}\sum_j p_j \le C^*_{\max}$) it also follows that $C_{\max} \le (1+\frac{1}{k})\,C^*_{\max}$. If $\ell$ is a long job, then the schedule delivered is optimal, since its makespan equals the length of the optimal schedule for just the long jobs, which is clearly no more than $C^*_{\max}$ for the entire input.

The algorithm can easily be implemented to run in polynomial time (treating $m$ as a constant).

Theorem 3.6: The family of algorithms $\{A_k\}$ is a polynomial-time approximation scheme for the problem of minimizing the makespan on any constant number of identical parallel machines.

A significant limitation is that the number of machines needs to be a constant. It is not too hard to extend these techniques to obtain a PTAS even if the number of machines is allowed to be an input parameter.
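The algorithm $A_k$ for constant $m$ can be sketched as below: brute-force the long jobs, then list-schedule the short ones. The enumeration is exponential in $mk$, which is why this only makes sense for constant $m$. Names and the test instance are illustrative.

```python
# A_k sketch: optimal long-job schedule by enumeration + list scheduling.
from itertools import product

def a_k_schedule(p, m, k):
    """Return the makespan achieved by A_k on processing times p."""
    threshold = sum(p) / (m * k)
    long_jobs = [x for x in p if x > threshold]
    short_jobs = [x for x in p if x <= threshold]
    # Optimal schedule for the (at most m*k) long jobs, by brute force.
    best_loads = [0.0] * m
    if long_jobs:
        best = None
        for assign in product(range(m), repeat=len(long_jobs)):
            loads = [0.0] * m
            for job, machine in zip(long_jobs, assign):
                loads[machine] += job
            if best is None or max(loads) < max(best):
                best = loads
        best_loads = best
    # List scheduling: each short job goes to the least-loaded machine.
    for job in short_jobs:
        i = min(range(m), key=lambda i: best_loads[i])
        best_loads[i] += job
    return max(best_loads)
```

With $p = (10, 10, 10, 1, 1, 1)$, $m = 2$, $k = 2$, the three jobs of size 10 are long; any 2-machine schedule must put two of them together, so the optimum is 20, and the algorithm achieves it.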
We didn't really need the schedule for the long jobs to be optimal: we used the optimality of the schedule only when the last job to finish was a long job. If we had found a schedule for the long jobs that had makespan at most $1+\frac{1}{k}$ times the optimal value, then that would have been sufficient. We will see how to obtain this near-optimal schedule for long jobs by rounding input sizes and dynamic programming, as previously done for the knapsack problem.

Let us first set a target length $T$ for the schedule. As before, we also fix a positive integer $k$; we will design a family of algorithms $\{A_k\}$ where $A_k$ either proves that no schedule of length $T$ exists, or else finds a schedule of length $(1+\frac{1}{k})T$. Later we will show how such a family of algorithms also implies the existence of a polynomial-time approximation scheme. We can assume that $T \ge \frac{1}{m}\sum_{j=1}^n p_j$; otherwise no feasible schedule of length $T$ exists, since the total processing requirement would exceed $mT$.
The algorithm is quite simple. Partition the jobs into long and short, but now require that $p_j > T/k$ for job $j$ to be long. We round down the processing requirement of each long job to its nearest multiple of $T/k^2$. We will determine in polynomial time whether or not there is a feasible schedule for these rounded long jobs that completes within time $T$. If yes, we interpret it as a schedule for the long jobs with their original processing requirements. If not, we conclude that no feasible schedule of length $T$ exists for the original input.

Finally, we extend this schedule to include the short jobs by using the list scheduling algorithm. We need to prove that the algorithm always produces a schedule of length at most $(1+\frac{1}{k})T$ whenever there exists a schedule of length at most $T$. When the original input has a schedule of length $T$, then so does the reduced input consisting only of the rounded long jobs (which is why we rounded down the processing requirements); in this case, the algorithm does compute a schedule for the original input.
Suppose that a schedule is found. It starts with a schedule of length at most $T$ for the rounded long jobs. Let $S$ be the set of jobs assigned by this schedule to one machine. Since each job in $S$ is long, and hence has rounded size at least $T/k$, it follows that $|S| \le k$. Furthermore, for each job $j \in S$, the difference between its true processing requirement and its rounded one is at most $T/k^2$ (because the rounded value is a multiple of $T/k^2$). Hence the total true processing requirement assigned to this machine is at most $T + k\cdot\frac{T}{k^2} = (1+\frac{1}{k})T$.

Now consider the effect of assigning the short jobs: each job $j$, in turn, is assigned to a machine for which the current load is smallest. Since $T \ge \frac{1}{m}\sum_j p_j$, the average load assigned to a machine is less than $T$, so there must exist a machine that is currently assigned jobs of total processing requirement less than $T$. So, when we choose the machine that currently has the lightest load, and then add the short job $j$ (with $p_j \le T/k$), this machine's new load is at most $T + p_j \le T + \frac{T}{k} = (1+\frac{1}{k})T$.
Hence, the schedule produced by list scheduling will also be of length at most $(1+\frac{1}{k})T$. To complete the argument, we must still show that we can use dynamic programming to decide if there is a schedule of length $T$ for the rounded long jobs. Clearly, if there is a rounded long job of size greater than $T$, then there is no such schedule. Otherwise, describe an input by a $k^2$-dimensional vector $(n_1,\ldots,n_{k^2})$, where the $i$th component specifies the number of long jobs of rounded size equal to $iT/k^2$, $i = 1,\ldots,k^2$. We know that for $i < k$, there are no such jobs, since that would imply that their original processing requirement was less than $T/k$, and hence not long.

So there are at most $n^{k^2}$ distinct inputs; a polynomial number! How many distinct ways are there to feasibly assign long jobs to one machine? Each rounded long job still has processing time at least $T/k$, so at most $k$ jobs are assigned to one machine. Again, an assignment to one machine can be described by a $k^2$-dimensional vector, where again the $i$th component specifies the number of long jobs of rounded size equal to $iT/k^2$ that are assigned to that machine.
Consider the vector $(s_1, s_2, \ldots, s_{k^2})$; we call it a machine configuration if $\sum_{i=1}^{k^2} s_i \cdot i \cdot \frac{T}{k^2} \le T$. Let $C$ be the set of all machine configurations. Note that there are at most $(k+1)^{k^2}$ distinct configurations, since each machine must process a number of rounded long jobs that is in the set $\{0,1,\ldots,k\}$. Since $k$ is fixed, this means that there are a constant number of configurations.

Let OPT$(n_1,\ldots,n_{k^2})$ be the minimum number of machines sufficient to schedule this arbitrary input. This value is given by the following recurrence (assign some jobs to one machine, and then use as few machines as possible for the rest):

OPT$(n_1,\ldots,n_{k^2}) = 1 + \min_{(s_1,\ldots,s_{k^2})\in C}$ OPT$(n_1-s_1,\ldots,n_{k^2}-s_{k^2})$

View this as a table with a polynomial number of entries (one for each possible input type). To compute each entry, we find the minimum over a constant number of previously computed values. The desired schedule of length $T$ exists exactly when the corresponding optimal value is at most $m$.
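The configuration recurrence can be sketched with memoized recursion. After dividing out the common factor $T/k^2$, a configuration $(s_1,\ldots,s_{k^2})$ is feasible when $\sum_i s_i \cdot i \le k^2$. The function name and indexing convention (0-based, `counts[i]` holding jobs of rounded size $(i+1)T/k^2$) are my own.

```python
# Configuration DP sketch for the rounded long jobs.
from functools import lru_cache
from itertools import product

def machines_needed(counts, k):
    """Min number of machines for rounded long jobs.

    counts[i] = number of jobs of rounded size (i+1)*T/k^2, i = 0..k^2-1.
    """
    k2 = k * k
    # All nonzero per-machine configurations fitting within T.
    configs = [s for s in product(range(k + 1), repeat=k2)
               if sum(s[i] * (i + 1) for i in range(k2)) <= k2 and any(s)]

    @lru_cache(maxsize=None)
    def opt(c):
        if not any(c):
            return 0                 # no jobs left: zero machines
        best = float("inf")
        for s in configs:            # fill one machine, recurse on the rest
            if all(s[i] <= c[i] for i in range(k2)):
                rest = tuple(c[i] - s[i] for i in range(k2))
                best = min(best, 1 + opt(rest))
        return best

    return opt(tuple(counts))
```

For $k = 2$, five jobs of rounded size $T/2$ (index 1, i.e., $2 \cdot T/k^2$) fit two per machine, so three machines are needed; a schedule of length $T$ exists iff this number is at most $m$.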
Finally, one can convert the family of algorithms $\{A_k\}$ into a polynomial-time approximation scheme.

Theorem 3.7: There is a PTAS for the problem of minimizing the makespan on identical parallel machines when the number of machines is part of the input.

Note that since we consider $(k+1)^{k^2}$ configurations and $k \approx 1/\epsilon$, the running time in the worst case is exponential in $O(1/\epsilon^2)$. Thus, in this case, we did not obtain a fully polynomial-time approximation scheme (in contrast to the knapsack problem). This is for a fundamental reason: this scheduling problem is strongly NP-complete. Even if we require that the processing times be restricted to values at most $q(n)$, a polynomial function of the number of jobs, this special case is still NP-complete. If a FPTAS existed for this problem, it could be used to solve this special case in polynomial time, which would imply that P = NP.
Institute of Operating Systems and Computer Networks Algorithms Group Network Algorithms Tutorial 4: Matching and other stuff Christian Rieck Matching 2 Matching A matching M in a graph is a set of pairwise
More informationModule 6 NP-Complete Problems and Heuristics
Module 6 NP-Complete Problems and Heuristics Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 397 E-mail: natarajan.meghanathan@jsums.edu Optimization vs. Decision
More informationApproximation Algorithms
Presentation for use with the textbook, Algorithm Design and Applications, by M. T. Goodrich and R. Tamassia, Wiley, 2015 Approximation Algorithms Tamassia Approximation Algorithms 1 Applications One of
More informationNotes for Lecture 24
U.C. Berkeley CS170: Intro to CS Theory Handout N24 Professor Luca Trevisan December 4, 2001 Notes for Lecture 24 1 Some NP-complete Numerical Problems 1.1 Subset Sum The Subset Sum problem is defined
More informationGreedy algorithms is another useful way for solving optimization problems.
Greedy Algorithms Greedy algorithms is another useful way for solving optimization problems. Optimization Problems For the given input, we are seeking solutions that must satisfy certain conditions. These
More informationGreedy algorithms Or Do the right thing
Greedy algorithms Or Do the right thing March 1, 2005 1 Greedy Algorithm Basic idea: When solving a problem do locally the right thing. Problem: Usually does not work. VertexCover (Optimization Version)
More informationApproximation Algorithms
Approximation Algorithms Prof. Tapio Elomaa tapio.elomaa@tut.fi Course Basics A new 4 credit unit course Part of Theoretical Computer Science courses at the Department of Mathematics There will be 4 hours
More informationGraphs and Network Flows IE411. Lecture 21. Dr. Ted Ralphs
Graphs and Network Flows IE411 Lecture 21 Dr. Ted Ralphs IE411 Lecture 21 1 Combinatorial Optimization and Network Flows In general, most combinatorial optimization and integer programming problems are
More informationTechnische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik. Combinatorial Optimization (MA 4502)
Technische Universität München, Zentrum Mathematik Lehrstuhl für Angewandte Geometrie und Diskrete Mathematik Combinatorial Optimization (MA 4502) Dr. Michael Ritter Problem Sheet 4 Homework Problems Problem
More information6 Randomized rounding of semidefinite programs
6 Randomized rounding of semidefinite programs We now turn to a new tool which gives substantially improved performance guarantees for some problems We now show how nonlinear programming relaxations can
More informationChapter 9 Graph Algorithms
Chapter 9 Graph Algorithms 2 Introduction graph theory useful in practice represent many real-life problems can be slow if not careful with data structures 3 Definitions an undirected graph G = (V, E)
More informationNP Completeness. Andreas Klappenecker [partially based on slides by Jennifer Welch]
NP Completeness Andreas Klappenecker [partially based on slides by Jennifer Welch] Dealing with NP-Complete Problems Dealing with NP-Completeness Suppose the problem you need to solve is NP-complete. What
More informationChapter 9 Graph Algorithms
Chapter 9 Graph Algorithms 2 Introduction graph theory useful in practice represent many real-life problems can be if not careful with data structures 3 Definitions an undirected graph G = (V, E) is a
More information11/22/2016. Chapter 9 Graph Algorithms. Introduction. Definitions. Definitions. Definitions. Definitions
Introduction Chapter 9 Graph Algorithms graph theory useful in practice represent many real-life problems can be slow if not careful with data structures 2 Definitions an undirected graph G = (V, E) is
More informationPolynomial time approximation algorithms
Polynomial time approximation algorithms Doctoral course Optimization on graphs - Lecture 5.2 Giovanni Righini January 18 th, 2013 Approximation algorithms There are several reasons for using approximation
More informationModule 6 NP-Complete Problems and Heuristics
Module 6 NP-Complete Problems and Heuristics Dr. Natarajan Meghanathan Professor of Computer Science Jackson State University Jackson, MS 97 E-mail: natarajan.meghanathan@jsums.edu Optimization vs. Decision
More informationProve, where is known to be NP-complete. The following problems are NP-Complete:
CMPSCI 601: Recall From Last Time Lecture 21 To prove is NP-complete: Prove NP. Prove, where is known to be NP-complete. The following problems are NP-Complete: SAT (Cook-Levin Theorem) 3-SAT 3-COLOR CLIQUE
More information1 The Traveling Salesman Problem
Comp 260: Advanced Algorithms Tufts University, Spring 2018 Prof. Lenore Cowen Scribe: Duc Nguyen Lecture 3a: The Traveling Salesman Problem 1 The Traveling Salesman Problem The Traveling Salesman Problem
More informationV1.0: Seth Gilbert, V1.1: Steven Halim August 30, Abstract. d(e), and we assume that the distance function is non-negative (i.e., d(x, y) 0).
CS4234: Optimisation Algorithms Lecture 4 TRAVELLING-SALESMAN-PROBLEM (4 variants) V1.0: Seth Gilbert, V1.1: Steven Halim August 30, 2016 Abstract The goal of the TRAVELLING-SALESMAN-PROBLEM is to find
More information2 Approximation Algorithms for Metric TSP
Comp260: Advanced Algorithms Tufts University, Spring 2002 Professor Lenore Cowen Scribe: Stephanie Tauber Lecture 3: The Travelling Salesman Problem (TSP) 1 Introduction A salesman wishes to visit every
More informationCOMP Analysis of Algorithms & Data Structures
COMP 3170 - Analysis of Algorithms & Data Structures Shahin Kamali Approximation Algorithms CLRS 35.1-35.5 University of Manitoba COMP 3170 - Analysis of Algorithms & Data Structures 1 / 30 Approaching
More informationChristofides Algorithm
2. compute minimum perfect matching of odd nodes 2. compute minimum perfect matching of odd nodes 2. compute minimum perfect matching of odd nodes 3. find Eulerian walk node order 2. compute minimum perfect
More informationCS 4407 Algorithms. Lecture 8: Circumventing Intractability, using Approximation and other Techniques
CS 4407 Algorithms Lecture 8: Circumventing Intractability, using Approximation and other Techniques Prof. Gregory Provan Department of Computer Science University College Cork CS 4010 1 Lecture Outline
More informationTheory of Computing. Lecture 10 MAS 714 Hartmut Klauck
Theory of Computing Lecture 10 MAS 714 Hartmut Klauck Seven Bridges of Königsberg Can one take a walk that crosses each bridge exactly once? Seven Bridges of Königsberg Model as a graph Is there a path
More informationDynamic programming. Trivial problems are solved first More complex solutions are composed from the simpler solutions already computed
Dynamic programming Solves a complex problem by breaking it down into subproblems Each subproblem is broken down recursively until a trivial problem is reached Computation itself is not recursive: problems
More informationCME 305: Discrete Mathematics and Algorithms Instructor: Reza Zadeh HW#3 Due at the beginning of class Thursday 02/26/15
CME 305: Discrete Mathematics and Algorithms Instructor: Reza Zadeh (rezab@stanford.edu) HW#3 Due at the beginning of class Thursday 02/26/15 1. Consider a model of a nonbipartite undirected graph in which
More information3 Euler Tours, Hamilton Cycles, and Their Applications
3 Euler Tours, Hamilton Cycles, and Their Applications 3.1 Euler Tours and Applications 3.1.1 Euler tours Carefully review the definition of (closed) walks, trails, and paths from Section 1... Definition
More informationCoping with NP-Completeness
Coping with NP-Completeness Siddhartha Sen Questions: sssix@cs.princeton.edu Some figures obtained from Introduction to Algorithms, nd ed., by CLRS Coping with intractability Many NPC problems are important
More information11.1 Facility Location
CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Facility Location ctd., Linear Programming Date: October 8, 2007 Today we conclude the discussion of local
More information6. Lecture notes on matroid intersection
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans May 2, 2017 6. Lecture notes on matroid intersection One nice feature about matroids is that a simple greedy algorithm
More informationNP-Hardness. We start by defining types of problem, and then move on to defining the polynomial-time reductions.
CS 787: Advanced Algorithms NP-Hardness Instructor: Dieter van Melkebeek We review the concept of polynomial-time reductions, define various classes of problems including NP-complete, and show that 3-SAT
More informationAdvanced Algorithms Class Notes for Monday, October 23, 2012 Min Ye, Mingfu Shao, and Bernard Moret
Advanced Algorithms Class Notes for Monday, October 23, 2012 Min Ye, Mingfu Shao, and Bernard Moret Greedy Algorithms (continued) The best known application where the greedy algorithm is optimal is surely
More information11. APPROXIMATION ALGORITHMS
11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: vertex cover LP rounding: vertex cover generalized load balancing knapsack problem Lecture slides by Kevin Wayne Copyright 2005
More information1 The Traveling Salesman Problem
Comp 260: Advanced Algorithms Tufts University, Spring 2011 Prof. Lenore Cowen Scribe: Jisoo Park Lecture 3: The Traveling Salesman Problem 1 The Traveling Salesman Problem The Traveling Salesman Problem
More informationMatching 4/21/2016. Bipartite Matching. 3330: Algorithms. First Try. Maximum Matching. Key Questions. Existence of Perfect Matching
Bipartite Matching Matching 3330: Algorithms A graph is bipartite if its vertex set can be partitioned into two subsets A and B so that each edge has one endpoint in A and the other endpoint in B. A B
More information/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18
601.433/633 Introduction to Algorithms Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/27/18 22.1 Introduction We spent the last two lectures proving that for certain problems, we can
More informationOptimal tour along pubs in the UK
1 From Facebook Optimal tour along 24727 pubs in the UK Road distance (by google maps) see also http://www.math.uwaterloo.ca/tsp/pubs/index.html (part of TSP homepage http://www.math.uwaterloo.ca/tsp/
More informationPolynomial Time Approximation Schemes for the Euclidean Traveling Salesman Problem
PROJECT FOR CS388G: ALGORITHMS: TECHNIQUES/THEORY (FALL 2015) Polynomial Time Approximation Schemes for the Euclidean Traveling Salesman Problem Shanshan Wu Vatsal Shah October 20, 2015 Abstract In this
More informationLecture 24: More Reductions (1997) Steven Skiena. skiena
Lecture 24: More Reductions (1997) Steven Skiena Department of Computer Science State University of New York Stony Brook, NY 11794 4400 http://www.cs.sunysb.edu/ skiena Prove that subgraph isomorphism
More informationApproximation Algorithms
18.433 Combinatorial Optimization Approximation Algorithms November 20,25 Lecturer: Santosh Vempala 1 Approximation Algorithms Any known algorithm that finds the solution to an NP-hard optimization problem
More information(Refer Slide Time: 01:00)
Advanced Operations Research Prof. G. Srinivasan Department of Management Studies Indian Institute of Technology, Madras Lecture minus 26 Heuristics for TSP In this lecture, we continue our discussion
More information35 Approximation Algorithms
35 Approximation Algorithms Many problems of practical significance are NP-complete, yet they are too important to abandon merely because we don t know how to find an optimal solution in polynomial time.
More informationCoping with the Limitations of Algorithm Power Exact Solution Strategies Backtracking Backtracking : A Scenario
Coping with the Limitations of Algorithm Power Tackling Difficult Combinatorial Problems There are two principal approaches to tackling difficult combinatorial problems (NP-hard problems): Use a strategy
More informationCSE 417 Branch & Bound (pt 4) Branch & Bound
CSE 417 Branch & Bound (pt 4) Branch & Bound Reminders > HW8 due today > HW9 will be posted tomorrow start early program will be slow, so debugging will be slow... Review of previous lectures > Complexity
More informationAdvanced Methods in Algorithms HW 5
Advanced Methods in Algorithms HW 5 Written by Pille Pullonen 1 Vertex-disjoint cycle cover Let G(V, E) be a finite, strongly-connected, directed graph. Let w : E R + be a positive weight function dened
More information1 Better Approximation of the Traveling Salesman
Stanford University CS261: Optimization Handout 4 Luca Trevisan January 13, 2011 Lecture 4 In which we describe a 1.5-approximate algorithm for the Metric TSP, we introduce the Set Cover problem, observe
More informationTraveling Salesman Problem. Algorithms and Networks 2014/2015 Hans L. Bodlaender Johan M. M. van Rooij
Traveling Salesman Problem Algorithms and Networks 2014/2015 Hans L. Bodlaender Johan M. M. van Rooij 1 Contents TSP and its applications Heuristics and approximation algorithms Construction heuristics,
More informationChapter Design Techniques for Approximation Algorithms
Chapter 2 Design Techniques for Approximation Algorithms I N THE preceding chapter we observed that many relevant optimization problems are NP-hard, and that it is unlikely that we will ever be able to
More informationTravelling Salesman Problem. Algorithms and Networks 2015/2016 Hans L. Bodlaender Johan M. M. van Rooij
Travelling Salesman Problem Algorithms and Networks 2015/2016 Hans L. Bodlaender Johan M. M. van Rooij 1 Contents TSP and its applications Heuristics and approximation algorithms Construction heuristics,
More informationCMPSCI611: Approximating SET-COVER Lecture 21
CMPSCI611: Approximating SET-COVER Lecture 21 Today we look at two more examples of approximation algorithms for NP-hard optimization problems. The first, for the SET-COVER problem, has an approximation
More information22 Elementary Graph Algorithms. There are two standard ways to represent a
VI Graph Algorithms Elementary Graph Algorithms Minimum Spanning Trees Single-Source Shortest Paths All-Pairs Shortest Paths 22 Elementary Graph Algorithms There are two standard ways to represent a graph
More informationCSE 548: Analysis of Algorithms. Lecture 13 ( Approximation Algorithms )
CSE 548: Analysis of Algorithms Lecture 13 ( Approximation Algorithms ) Rezaul A. Chowdhury Department of Computer Science SUNY Stony Brook Fall 2017 Approximation Ratio Consider an optimization problem
More informationStrongly Connected Spanning Subgraph for Almost Symmetric Networks
CCC 2015, Kingston, Ontario, August 10 12, 2015 Strongly Connected Spanning Subgraph for Almost Symmetric Networks A. Karim Abu-Affash Paz Carmi Anat Parush Tzur Abstract In the strongly connected spanning
More informationGreedy Algorithms 1. For large values of d, brute force search is not feasible because there are 2 d
Greedy Algorithms 1 Simple Knapsack Problem Greedy Algorithms form an important class of algorithmic techniques. We illustrate the idea by applying it to a simplified version of the Knapsack Problem. Informally,
More informationAdjacent: Two distinct vertices u, v are adjacent if there is an edge with ends u, v. In this case we let uv denote such an edge.
1 Graph Basics What is a graph? Graph: a graph G consists of a set of vertices, denoted V (G), a set of edges, denoted E(G), and a relation called incidence so that each edge is incident with either one
More informationApproximation Algorithms
Approximation Algorithms Subhash Suri June 5, 2018 1 Figure of Merit: Performance Ratio Suppose we are working on an optimization problem in which each potential solution has a positive cost, and we want
More informationThe Encoding Complexity of Network Coding
The Encoding Complexity of Network Coding Michael Langberg Alexander Sprintson Jehoshua Bruck California Institute of Technology Email: mikel,spalex,bruck @caltech.edu Abstract In the multicast network
More informationMatching Algorithms. Proof. If a bipartite graph has a perfect matching, then it is easy to see that the right hand side is a necessary condition.
18.433 Combinatorial Optimization Matching Algorithms September 9,14,16 Lecturer: Santosh Vempala Given a graph G = (V, E), a matching M is a set of edges with the property that no two of the edges have
More informationFundamental Properties of Graphs
Chapter three In many real-life situations we need to know how robust a graph that represents a certain network is, how edges or vertices can be removed without completely destroying the overall connectivity,
More informationAssignment 4 Solutions of graph problems
Assignment 4 Solutions of graph problems 1. Let us assume that G is not a cycle. Consider the maximal path in the graph. Let the end points of the path be denoted as v 1, v k respectively. If either of
More informationNP-Hard (A) (B) (C) (D) 3 n 2 n TSP-Min any Instance V, E Question: Hamiltonian Cycle TSP V, n 22 n E u, v V H
Hard Problems What do you do when your problem is NP-Hard? Give up? (A) Solve a special case! (B) Find the hidden parameter! (Fixed parameter tractable problems) (C) Find an approximate solution. (D) Find
More informationOn Covering a Graph Optimally with Induced Subgraphs
On Covering a Graph Optimally with Induced Subgraphs Shripad Thite April 1, 006 Abstract We consider the problem of covering a graph with a given number of induced subgraphs so that the maximum number
More information5. Lecture notes on matroid intersection
Massachusetts Institute of Technology Handout 14 18.433: Combinatorial Optimization April 1st, 2009 Michel X. Goemans 5. Lecture notes on matroid intersection One nice feature about matroids is that a
More informationCombinatorical Methods
Chapter 9 Combinatorical Methods Popular combinatorical methods in approximation algorithms include the greedy method, dynamic programming, branch and bound, local search, and combinatorical transformations.
More information15-451/651: Design & Analysis of Algorithms November 4, 2015 Lecture #18 last changed: November 22, 2015
15-451/651: Design & Analysis of Algorithms November 4, 2015 Lecture #18 last changed: November 22, 2015 While we have good algorithms for many optimization problems, the previous lecture showed that many
More informationPACKING DIGRAPHS WITH DIRECTED CLOSED TRAILS
PACKING DIGRAPHS WITH DIRECTED CLOSED TRAILS PAUL BALISTER Abstract It has been shown [Balister, 2001] that if n is odd and m 1,, m t are integers with m i 3 and t i=1 m i = E(K n) then K n can be decomposed
More information1 Minimum Spanning Trees (MST) b 2 3 a. 10 e h. j m
Minimum Spanning Trees (MST) 8 0 e 7 b 3 a 5 d 9 h i g c 8 7 6 3 f j 9 6 k l 5 m A graph H(U,F) is a subgraph of G(V,E) if U V and F E. A subgraph H(U,F) is called spanning if U = V. Let G be a graph with
More informationBest known solution time is Ω(V!) Check every permutation of vertices to see if there is a graph edge between adjacent vertices
Hard Problems Euler-Tour Problem Undirected graph G=(V,E) An Euler Tour is a path where every edge appears exactly once. The Euler-Tour Problem: does graph G have an Euler Path? Answerable in O(E) time.
More informationNotes 4 : Approximating Maximum Parsimony
Notes 4 : Approximating Maximum Parsimony MATH 833 - Fall 2012 Lecturer: Sebastien Roch References: [SS03, Chapters 2, 5], [DPV06, Chapters 5, 9] 1 Coping with NP-completeness Local search heuristics.
More informationA subexponential parameterized algorithm for Subset TSP on planar graphs
A subexponential parameterized algorithm for Subset TSP on planar graphs Philip N. Klein Dániel Marx Abstract Given a graph G and a subset S of vertices, the Subset TSP problem asks for a shortest closed
More information11. APPROXIMATION ALGORITHMS
Coping with NP-completeness 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: weighted vertex cover LP rounding: weighted vertex cover generalized load balancing knapsack problem
More informationLecture 2. 1 Introduction. 2 The Set Cover Problem. COMPSCI 632: Approximation Algorithms August 30, 2017
COMPSCI 632: Approximation Algorithms August 30, 2017 Lecturer: Debmalya Panigrahi Lecture 2 Scribe: Nat Kell 1 Introduction In this lecture, we examine a variety of problems for which we give greedy approximation
More informationMinimum Spanning Trees COSC 594: Graph Algorithms Spring By Kevin Chiang and Parker Tooley
Minimum Spanning Trees COSC 594: Graph Algorithms Spring 2017 By Kevin Chiang and Parker Tooley Test Questions 1. What is one NP-Hard problem for which Minimum Spanning Trees is a good approximation for?
More informationPartha Sarathi Mandal
MA 515: Introduction to Algorithms & MA353 : Design and Analysis of Algorithms [3-0-0-6] Lecture 39 http://www.iitg.ernet.in/psm/indexing_ma353/y09/index.html Partha Sarathi Mandal psm@iitg.ernet.in Dept.
More information2. Optimization problems 6
6 2.1 Examples... 7... 8 2.3 Convex sets and functions... 9 2.4 Convex optimization problems... 10 2.1 Examples 7-1 An (NP-) optimization problem P 0 is defined as follows Each instance I P 0 has a feasibility
More informationMatching Theory. Figure 1: Is this graph bipartite?
Matching Theory 1 Introduction A matching M of a graph is a subset of E such that no two edges in M share a vertex; edges which have this property are called independent edges. A matching M is said to
More information