6 Randomized rounding of semidefinite programs
We now turn to a new tool that gives substantially improved performance guarantees for some problems. We show how nonlinear programming relaxations can give us better algorithms than we know how to obtain via LP; in particular, we use a type of nonlinear program called a semidefinite program (SDP). Part of the power of SDPs is that they can be solved in polynomial time.

MAT ApprAl, Spring May

6.1 A brief introduction to semidefinite programming

In what follows, $X^T$ is the transpose of the matrix $X$, and vectors $v \in \mathbb{R}^n$ are assumed to be column vectors, so that $v^T v$ is the inner product of $v$ with itself, while $v v^T$ is an $n \times n$ matrix.

Definition 6.1: A matrix $X \in \mathbb{R}^{n \times n}$ is positive semidefinite iff for all $y \in \mathbb{R}^n$, $y^T X y \ge 0$.
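Definition 6.1 suggests a simple randomized sanity check: sample random vectors $y$ and test $y^T X y \ge 0$. The sketch below (illustrative only; the function name and the example matrices are ours, not from the notes) cannot prove a matrix is psd, but a single violating $y$ disproves it.

```python
import random

def quad_form(X, y):
    """Compute y^T X y for a square matrix X (list of rows) and a vector y."""
    n = len(X)
    return sum(y[i] * X[i][j] * y[j] for i in range(n) for j in range(n))

def looks_psd(X, trials=1000, seed=0):
    """Randomized sanity check of Definition 6.1: sample random y and verify
    y^T X y >= 0.  Passing does not prove X is psd; one failing y disproves it."""
    rng = random.Random(seed)
    n = len(X)
    for _ in range(trials):
        y = [rng.uniform(-1.0, 1.0) for _ in range(n)]
        if quad_form(X, y) < -1e-12:
            return False
    return True

# X = vv^T is psd for any v, since y^T (vv^T) y = (v . y)^2 >= 0;
# the second matrix has eigenvalues 3 and -1, so it is not psd.
psd = [[1.0, 2.0], [2.0, 4.0]]      # vv^T with v = (1, 2)
not_psd = [[1.0, 2.0], [2.0, 1.0]]  # indefinite
print(looks_psd(psd), looks_psd(not_psd))
```

For example, $y = (1, -1)$ gives $y^T X y = -2$ for the second matrix, and random sampling finds such a witness quickly.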
We abbreviate positive semidefinite as psd; $X \succeq 0$ denotes that a matrix $X$ is psd. We assume throughout that any psd matrix is also symmetric.

Fact 6.2: If $X$ is a symmetric matrix, then the following statements are equivalent:
1. $X$ is psd;
2. $X$ has non-negative eigenvalues;
3. $X = V^T V$ for some $V \in \mathbb{R}^{m \times n}$ where $m \le n$;
4. $X = \sum_i \lambda_i w_i w_i^T$ for some $\lambda_i \ge 0$ and vectors $w_i \in \mathbb{R}^n$ such that $w_i^T w_i = 1$ and $w_i^T w_j = 0$ for $i \ne j$.

An SDP is similar to an LP in that there is a linear objective function and linear constraints. In addition, a square symmetric matrix of variables can be constrained to be psd. In the example below the variables are the $x_{ij}$:

maximize or minimize $\sum_{i,j} c_{ij} x_{ij}$
subject to $\sum_{i,j} a_{ijk} x_{ij} = b_k$ for all $k$,
$X = (x_{ij}) \succeq 0$,
$x_{ij} = x_{ji}$ for all $i, j$.
Given some technical conditions, semidefinite programs can be solved to within an additive error of $\epsilon$ in time that is polynomial in the size of the input and $\log(1/\epsilon)$. We will usually ignore the additive error when discussing SDPs and assume that they can be solved exactly, since the algorithms we use do not require exact solutions, and one can usually analyze the algorithm with additive error in the same way with only a small loss in the performance guarantee.

We often use SDPs in the form of vector programs (VP). The variables of a VP are vectors $v_i \in \mathbb{R}^n$, where the dimension $n$ of the space is the number of vectors in the VP. A VP has an objective function and constraints that are linear in the inner products of these vectors. We write the inner product of $v_i$ and $v_j$ as $v_i \cdot v_j$, or sometimes as $v_i^T v_j$. Below we give an example of a VP.
maximize or minimize $\sum_{i,j} c_{ij} (v_i \cdot v_j)$
subject to $\sum_{i,j} a_{ijk} (v_i \cdot v_j) = b_k$ for all $k$,
$v_i \in \mathbb{R}^n$, $i = 1, \ldots, n$.

We claim that the previous SDP and this VP are in fact equivalent. This follows from Fact 6.2; in particular, it follows since a symmetric $X$ is psd if and only if $X = V^T V$ for some matrix $V$.

Given a solution to the SDP, we can compute in polynomial time a matrix $V$ for which $X = V^T V$ (to within small error). We set $v_i$ to be the $i$th column of $V$. Then $x_{ij} = v_i \cdot v_j$, and the $v_i$ are a feasible solution of the same value to the VP. Similarly, given a solution to the vector program, we construct a matrix $V$ whose $i$th column is $v_i$, and let $X = V^T V$. Then $X$ is symmetric and psd, with $x_{ij} = v_i \cdot v_j$, so that $X$ is a feasible solution of the same value for the SDP.
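The factorization $X = V^T V$ can be computed with a Cholesky decomposition. A minimal sketch (ours, assuming a strictly positive definite input; a merely psd $X$ needs a pivoted variant):

```python
import math

def cholesky(X):
    """Factor a symmetric positive definite X as X = L L^T with L lower
    triangular.  Then V = L^T satisfies X = V^T V, and the i-th column of V
    (equivalently, the i-th row of L) is the vector v_i."""
    n = len(X)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(X[i][i] - s)
            else:
                L[i][j] = (X[i][j] - s) / L[j][j]
    return L

X = [[2.0, 1.0], [1.0, 2.0]]  # symmetric, eigenvalues 3 and 1, so psd
L = cholesky(X)
v = L  # v[i] = i-th row of L = i-th column of V = L^T
dot = lambda a, b: sum(p * q for p, q in zip(a, b))
# Recover x_ij = v_i . v_j and compare with X.
print([[round(dot(v[i], v[j]), 10) for j in range(2)] for i in range(2)])
```

The printed matrix of inner products matches $X$ entry by entry, which is exactly the SDP-to-VP translation described above.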
6.2 Finding large cuts

For this problem, the input is an undirected graph $G = (V, E)$ and nonnegative weights $w_{ij} \ge 0$ for each edge $(i,j) \in E$. The goal is to partition the vertex set into two parts, $U$ and $W = V \setminus U$, so as to maximize the weight of the edges whose two endpoints are in different parts, one in $U$ and one in $W$. Fair coin flipping gives a $\frac{1}{2}$-approximation algorithm for the maximum cut problem.

We now use SDP to give a 0.878-approximation algorithm for the problem in general graphs. We start by considering the following formulation of the maximum cut problem:

maximize $\frac{1}{2} \sum_{(i,j) \in E} w_{ij} (1 - y_i y_j)$
subject to $y_i \in \{-1, +1\}$, $i = 1, \ldots, n$.

We claim that if we can solve this formulation, then we can solve the MAX CUT problem.
Lemma 6.3: The previous program models the maximum cut problem.

Proof: Consider the cut $U = \{i : y_i = -1\}$ and $W = \{i : y_i = +1\}$. Note that if an edge $(i,j)$ is in this cut, then $y_i y_j = -1$, while if the edge is not in the cut, $y_i y_j = 1$. Thus $\frac{1}{2} \sum_{(i,j) \in E} w_{ij} (1 - y_i y_j)$ gives the weight of all the edges in the cut. Hence finding the setting of the $y_i$ to $\pm 1$ that maximizes this sum gives the maximum-weight cut.

Consider now the following vector programming relaxation of the program:

maximize $\frac{1}{2} \sum_{(i,j) \in E} w_{ij} (1 - v_i \cdot v_j)$
subject to $v_i \cdot v_i = 1$, $v_i \in \mathbb{R}^n$, $i = 1, \ldots, n$.

This is a relaxation: take any feasible solution $y$ and produce a feasible solution to this program of the same value by setting $v_i = (y_i, 0, 0, \ldots, 0)$; clearly $v_i \cdot v_i = 1$ and $v_i \cdot v_j = y_i y_j$ for this solution. Thus if $Z_{VP}$ is the value of an optimal solution to the vector program, it must be that $Z_{VP} \ge \mathrm{OPT}$.
We can solve the relaxation in polynomial time. We would now like to round the solution to obtain a near-optimal cut. To do this, we introduce a form of randomized rounding suitable for vector programming. We pick a random vector $r = (r_1, \ldots, r_n)$ by drawing each component independently from $N(0,1)$, the normal distribution with mean 0 and variance 1. ($N(0,1)$ can be simulated by drawing repeatedly from the uniform distribution on $[0,1]$.) Given a solution $v_1, \ldots, v_n$, we iterate through all the vertices and put $i \in U$ if $v_i \cdot r \ge 0$ and $i \in W$ otherwise.

Another way of looking at this algorithm is to consider the hyperplane with normal $r$ containing the origin. All the vectors $v_i$ lie on the unit sphere, since $v_i \cdot v_i = 1$ means they are unit vectors. The hyperplane with normal $r$ containing the origin splits the sphere in half; all vertices in one half (the half such that $v_i \cdot r \ge 0$) are put into $U$, and all vertices in the other half are put into $W$. As we see below, the vector $r / \|r\|$ is uniform over the unit sphere, so this is equivalent to randomly splitting the unit sphere in half.
For this reason, this technique is sometimes called choosing a random hyperplane.

Fact 6.4: The normalization of $r$, $r / \|r\|$, is uniformly distributed over the $n$-dimensional unit sphere.

Fact 6.5: The projections of $r$ onto two unit vectors $e_1$ and $e_2$ are independent and are normally distributed with mean 0 and variance 1 iff $e_1$ and $e_2$ are orthogonal.

Corollary 6.6: Let $r'$ be the projection of $r$ onto a two-dimensional plane. Then the normalization of $r'$, $r' / \|r'\|$, is uniformly distributed on a unit circle in the plane.
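The random hyperplane rounding described above can be sketched as follows. This is an illustrative sketch: the input vectors are assumed to come from an already-solved vector program, and here we simply hard-code optimal vectors for a 4-cycle instead of calling an SDP solver.

```python
import random

def random_hyperplane_cut(vectors, seed=None):
    """Round unit vectors v_i to a cut (U, W): draw a normal r with i.i.d.
    N(0,1) coordinates and put i in U iff v_i . r >= 0."""
    rng = random.Random(seed)
    dim = len(vectors[0])
    r = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    U = [i for i, v in enumerate(vectors)
         if sum(vc * rc for vc, rc in zip(v, r)) >= 0.0]
    in_U = set(U)
    W = [i for i in range(len(vectors)) if i not in in_U]
    return U, W

# Hand-picked vectors for the 4-cycle 0-1-2-3-0: opposite vertices share a
# vector and adjacent vertices get antipodal vectors, so every hyperplane
# separates all four edges, recovering the maximum cut {0,2} vs {1,3}.
vecs = [(1.0, 0.0), (-1.0, 0.0), (1.0, 0.0), (-1.0, 0.0)]
U, W = random_hyperplane_cut(vecs, seed=1)
print(sorted(U), sorted(W))
```

Whatever the draw of $r$, vertices 0 and 2 always land on the same side and 1 and 3 on the other, so this instance is cut optimally with probability 1.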
Let us begin the proof that choosing a random hyperplane gives a 0.878-approximation algorithm for the problem. We will need the following two lemmas.

Lemma 6.7: The probability that edge $(i,j)$ is in the cut is $\frac{1}{\pi} \arccos(v_i \cdot v_j)$.

Proof: Let $r'$ be the projection of $r$ onto the plane defined by $v_i$ and $v_j$, and write $r = r' + r''$. Then $r''$ is orthogonal to both $v_i$ and $v_j$, and $v_i \cdot r = v_i \cdot r'$. Similarly $v_j \cdot r = v_j \cdot r'$.

Consider two lines through the origin in this plane, one perpendicular to the vector $v_i$ and the other perpendicular to $v_j$. By Corollary 6.6, the vector $r'$ with its tail at the origin is oriented with respect to the vector $v_i$ by an angle $\alpha$ chosen uniformly from $[0, 2\pi)$. If $r'$ is on one side of the line perpendicular to $v_i$, it has a nonnegative inner product with $v_i$, otherwise not; likewise for $v_j$ and its perpendicular line. Thus we have $i \in U$ and $j \in W$ if $r'$ falls in one of the two sectors between the lines, and $j \in U$ and $i \in W$ if $r'$ falls in the opposite sector.
If the angle formed by $v_i$ and $v_j$ is $\theta$ radians, then each of the two sectors also spans $\theta$ radians. Hence the fraction of values of $\alpha$, the angle of $r'$, corresponding to the event in which $(i,j)$ is in the cut is $2\theta / 2\pi = \theta / \pi$. Thus the probability that $(i,j)$ is in the cut is $\theta / \pi$. We know that $v_i \cdot v_j = \|v_i\| \|v_j\| \cos\theta$. Since $v_i$ and $v_j$ are both unit-length vectors, we have $\theta = \arccos(v_i \cdot v_j)$, which completes the proof of the lemma.

Lemma 6.8: For $x \in [-1, 1]$,
$\frac{1}{\pi} \arccos(x) \ge 0.878 \cdot \frac{1}{2} (1 - x)$.

Proof: The proof follows from simple calculus.
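Although the calculus is left to the reader, the constant in Lemma 6.8 can be checked numerically: the smallest value of the ratio $\frac{(1/\pi)\arccos(x)}{(1-x)/2}$ over $x \in (-1,1)$ is about 0.8786, which is where the 0.878 guarantee comes from. A grid-scan sketch:

```python
import math

def ratio(x):
    """(1/pi) * arccos(x) divided by (1/2) * (1 - x)."""
    return (math.acos(x) / math.pi) / ((1.0 - x) / 2.0)

# Scan a fine grid of x in (-1, 1); the minimum is attained in the interior
# (near x = -0.69), so endpoint behavior does not matter here.
grid = (i / 100000.0 for i in range(-99999, 100000))
m = min(ratio(x) for x in grid)
print(round(m, 4))
```

The scan reports a minimum just above 0.878, consistent with the lemma.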
Theorem 6.9: Rounding the vector program by choosing a random hyperplane is a 0.878-approximation algorithm for the max cut problem.

Proof: Let $X_{ij}$ be a random variable for edge $(i,j)$ such that $X_{ij} = 1$ if $(i,j)$ is in the cut given by the algorithm, and 0 otherwise. Let $Z$ be a random variable giving the weight of the cut; that is, $Z = \sum_{(i,j) \in E} w_{ij} X_{ij}$. Then by Lemma 6.7,

$E[Z] = \sum_{(i,j) \in E} w_{ij} \Pr[\text{edge } (i,j) \text{ is in cut}] = \sum_{(i,j) \in E} w_{ij} \cdot \frac{1}{\pi} \arccos(v_i \cdot v_j)$.

By Lemma 6.8, we can bound each term $\frac{1}{\pi} \arccos(v_i \cdot v_j)$ below by $0.878 \cdot \frac{1}{2}(1 - v_i \cdot v_j)$, so that

$E[Z] \ge 0.878 \cdot \frac{1}{2} \sum_{(i,j) \in E} w_{ij} (1 - v_i \cdot v_j) = 0.878 \cdot Z_{VP} \ge 0.878 \cdot \mathrm{OPT}$.

We know that $\mathrm{OPT} \le Z_{VP}$. The proof of the theorem above shows that there is a cut of value at least $0.878 \cdot Z_{VP}$, so that $0.878 \cdot Z_{VP} \le \mathrm{OPT}$.
Thus we have that $0.878 \cdot Z_{VP} \le \mathrm{OPT} \le Z_{VP}$. It has been shown that there are graphs for which the upper inequality is met with equality. This implies that we can get no better performance guarantee for the maximum cut problem by using $Z_{VP}$ as an upper bound on OPT. Currently, 0.878 is the best performance guarantee known for MAX CUT.

Theorem 6.10: If there is an $\alpha$-approximation algorithm for the maximum cut problem with $\alpha > 0.941$, then P = NP.

Theorem 6.11: Given the unique games conjecture, there is no $\alpha$-approximation algorithm for the maximum cut problem with constant
$\alpha > \min_{-1 \le x \le 1} \frac{\frac{1}{\pi} \arccos(x)}{\frac{1}{2}(1 - x)} \approx 0.878$
unless P = NP.

It is possible to derandomize the algorithm by a sophisticated application of the method of conditional expectations. The derandomization incurs a loss in the performance guarantee that can be made as small as desired (by increasing the running time).
6.3 Approximating quadratic programs

We can extend the algorithm above for MAX CUT to the following more general problem, in which we wish to approximate the quadratic program:

maximize $\sum_{1 \le i, j \le n} a_{ij} x_i x_j$
subject to $x_i \in \{-1, +1\}$, $i = 1, \ldots, n$.

We need to be careful in this case, since it is possible that the value of an optimal solution is negative (e.g., if the values of $a_{ii}$ are negative and all other $a_{ij}$ are zero).

Thus far we have only been considering problems in which all feasible solutions have nonnegative value, so that the definition of an approximation algorithm makes sense. To see that the definition might not make sense in the case of negative solution values, suppose we have an $\alpha$-approximation algorithm for a maximization problem with $\alpha < 1$ and a problem instance in which OPT is negative. Then the approximation algorithm guarantees the value of our solution is at least $\alpha \cdot \mathrm{OPT}$, which means that the value of our solution would be greater than that of OPT.
In order to get around this difficulty, we restrict the objective function matrix $A = (a_{ij})$ to itself be positive semidefinite. Observe then that for any feasible solution $x$, the value of the objective function is $x^T A x$, which is nonnegative by the definition of psd matrices.

As in the case of the maximum cut problem, we then have the following VP relaxation:

maximize $\sum_{1 \le i, j \le n} a_{ij} (v_i \cdot v_j)$
subject to $v_i \cdot v_i = 1$, $v_i \in \mathbb{R}^n$, $i = 1, \ldots, n$.

Let $Z_{VP}$ be the value of an optimal solution for this vector program. By the same argument as in the previous section, $Z_{VP} \ge \mathrm{OPT}$. We can also use the same algorithm as we did for the maximum cut problem: we solve the VP in polynomial time and obtain vectors $v_i$, choose a random hyperplane with normal $r$, and generate a solution $x$ for the quadratic program by setting $x_i = +1$ if $v_i \cdot r \ge 0$ and $x_i = -1$ otherwise. This gives a $\frac{2}{\pi}$-approximation algorithm for the quadratic program.
Lemma 6.12: $E[x_i x_j] = \frac{2}{\pi} \arcsin(v_i \cdot v_j)$.

Proof: Recall from Lemma 6.7 that the probability that $v_i$ and $v_j$ are on different sides of the random hyperplane is $\frac{1}{\pi} \arccos(v_i \cdot v_j)$. Thus the probability that $x_i$ and $x_j$ have different values is $\frac{1}{\pi} \arccos(v_i \cdot v_j)$. Observe that if $x_i$ and $x_j$ have different values, then their product must be $-1$, while if they have the same value, their product must be $1$. Thus the probability that the product is $-1$ is $\frac{1}{\pi} \arccos(v_i \cdot v_j)$. Hence

$E[x_i x_j] = \Pr[x_i x_j = 1] - \Pr[x_i x_j = -1] = 1 - \frac{2}{\pi} \arccos(v_i \cdot v_j)$.

Using $\arcsin(x) + \arccos(x) = \frac{\pi}{2}$, we get

$E[x_i x_j] = 1 - \frac{2}{\pi} \left( \frac{\pi}{2} - \arcsin(v_i \cdot v_j) \right) = \frac{2}{\pi} \arcsin(v_i \cdot v_j)$.
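The last algebraic step of the lemma is easy to spot-check numerically: $1 - \frac{2}{\pi}\arccos(x)$ and $\frac{2}{\pi}\arcsin(x)$ agree for every inner-product value $x \in [-1, 1]$, since they differ only by the identity $\arcsin(x) + \arccos(x) = \frac{\pi}{2}$. A quick sketch:

```python
import math

def e_product(x):
    """E[x_i x_j] via the probability argument: 1 - (2/pi) arccos(x)."""
    return 1.0 - (2.0 / math.pi) * math.acos(x)

def e_product_arcsin(x):
    """The closed form claimed by Lemma 6.12: (2/pi) arcsin(x)."""
    return (2.0 / math.pi) * math.asin(x)

# Compare the two expressions on a grid of x in [-1, 1].
worst = max(abs(e_product(k / 1000.0) - e_product_arcsin(k / 1000.0))
            for k in range(-1000, 1001))
print(worst)
```

The worst-case discrepancy is at the level of floating-point roundoff.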
In order to analyze this algorithm, we have to use a global analysis rather than a term-by-term analysis. We can then show the following theorem.

Theorem 6.16: Rounding the vector program by choosing a random hyperplane is a $\frac{2}{\pi}$-approximation algorithm for the quadratic program when the objective function matrix is positive semidefinite.

6.4 Finding a correlation clustering

In this problem we are given an undirected graph in which each edge $(i,j)$ is given two nonnegative weights, $w^+_{ij} \ge 0$ and $w^-_{ij} \ge 0$. The goal is to cluster the vertices into sets of similar vertices; the degree to which $i$ and $j$ are similar is given by $w^+_{ij}$, and the degree to which they are different is given by $w^-_{ij}$. We represent a clustering of the vertices by a partition $S$ of the vertex set into non-empty subsets.
Let $\delta(S)$ be the set of edges that have endpoints in different sets of the partition, and let $E(S)$ be the set of edges that have both endpoints in the same part of the partition. Then the goal of the problem is to find a partition $S$ that maximizes the total "similar" weight of edges inside the sets of the partition plus the total "different" weight of edges between sets of the partition; in other words, we find $S$ to maximize

$\sum_{(i,j) \in E(S)} w^+_{ij} + \sum_{(i,j) \in \delta(S)} w^-_{ij}$.

It is easy to get a $\frac{1}{2}$-approximation algorithm for this problem. If we put all the vertices into a single cluster (that is, $S = \{V\}$), then the value of this solution is $\sum_{(i,j) \in E} w^+_{ij}$. If we make each vertex its own cluster (that is, $S = \{\{i\} : i \in V\}$), then the value of this solution is $\sum_{(i,j) \in E} w^-_{ij}$. Since $\mathrm{OPT} \le \sum_{(i,j) \in E} (w^+_{ij} + w^-_{ij})$, at least one of these two solutions has a value of at least $\frac{1}{2} \mathrm{OPT}$.
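The two trivial candidate clusterings can be sketched in a few lines; the toy instance below is ours, not from the notes.

```python
# Each edge is (i, j, w_plus, w_minus), both weights nonnegative.
edges = [(0, 1, 5.0, 1.0), (1, 2, 0.0, 4.0), (0, 2, 2.0, 2.0)]

one_cluster = sum(wp for _, _, wp, _ in edges)  # S = {V}: all w+ collected
singletons = sum(wm for _, _, _, wm in edges)   # S = {{i}}: all w- collected
total = one_cluster + singletons                # upper bound on OPT

# max(a, b) >= (a + b)/2 and OPT <= total, so the better of the two
# trivial clusterings is within a factor 1/2 of optimal.
best = max(one_cluster, singletons)
print(best, total)
```

On this instance both trivial solutions happen to have value 7, against an upper bound of 14, so the factor-$\frac{1}{2}$ guarantee is tight here.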
We can obtain a $\frac{3}{4}$-approximation algorithm by using semidefinite programming. Let $e_k$ be the $k$th unit vector; that is, it has a one in the $k$th coordinate and zeros elsewhere. We will have a vector $x_i$ for each vertex $i$; we set $x_i = e_k$ if $i$ is in the $k$th cluster. The model of the problem then becomes

maximize $\sum_{(i,j) \in E} \left( w^+_{ij} (x_i \cdot x_j) + w^-_{ij} (1 - x_i \cdot x_j) \right)$
subject to $x_i \in \{e_1, \ldots, e_n\}$, $i = 1, \ldots, n$,

since $x_i \cdot x_j = 1$ if $i$ and $j$ are in the same cluster and 0 otherwise.

We can now relax this model to the following VP:

maximize $\sum_{(i,j) \in E} \left( w^+_{ij} (v_i \cdot v_j) + w^-_{ij} (1 - v_i \cdot v_j) \right)$
subject to $v_i \cdot v_i = 1$, $v_i \cdot v_j \ge 0$ for all $i, j$, $v_i \in \mathbb{R}^n$, $i = 1, \ldots, n$.

Let $Z_{VP}$ be the value of an optimal solution for the vector program. Observe that the VP is a relaxation, since any feasible solution to the model above is feasible for the VP and has the same value. Thus $Z_{VP} \ge \mathrm{OPT}$.
Previously we chose a random hyperplane to partition the vectors into two sets; in MAX CUT this gave the two sides of the cut, and in the quadratic program it gave the variables of value $+1$ and $-1$. Now we will choose two random hyperplanes, with two independent random vectors $r_1$ and $r_2$ as their normals. This partitions the vertices into $2^2 = 4$ sets; in particular, we partition the vertices into the sets

$R_1 = \{i : v_i \cdot r_1 \ge 0, \; v_i \cdot r_2 \ge 0\}$,
$R_2 = \{i : v_i \cdot r_1 \ge 0, \; v_i \cdot r_2 < 0\}$,
$R_3 = \{i : v_i \cdot r_1 < 0, \; v_i \cdot r_2 \ge 0\}$,
$R_4 = \{i : v_i \cdot r_1 < 0, \; v_i \cdot r_2 < 0\}$.

The solution consisting of these four clusters, with $S = \{R_1, R_2, R_3, R_4\}$, comes within a factor of $\frac{3}{4}$ of the optimal value.

Lemma 6.17: For $x \in [0, 1]$,
$\left(1 - \frac{1}{\pi} \arccos(x)\right)^2 \ge 0.75\, x$
and
$1 - \left(1 - \frac{1}{\pi} \arccos(x)\right)^2 \ge 0.75\, (1 - x)$.
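Like Lemma 6.8, Lemma 6.17 can be verified numerically. Writing $p(x) = 1 - \frac{1}{\pi}\arccos(x)$ for the single-hyperplane same-side probability, both inequalities hold on $[0,1]$, with the second tight at $x = 0$. A grid-check sketch:

```python
import math

def p(x):
    """Probability that two vectors with inner product x land on the same
    side of one random hyperplane: 1 - arccos(x)/pi."""
    return 1.0 - math.acos(x) / math.pi

# Check both inequalities of Lemma 6.17 on a grid of x in [0, 1],
# with a tiny tolerance for floating-point roundoff.
for k in range(0, 1001):
    x = k / 1000.0
    assert p(x) ** 2 >= 0.75 * x - 1e-12
    assert 1.0 - p(x) ** 2 >= 0.75 * (1.0 - x) - 1e-12
print("Lemma 6.17 holds on the grid")
```

At $x = 0$, $p(0)^2 = \frac{1}{4}$ and $1 - p(0)^2 = \frac{3}{4} = 0.75 \cdot (1 - 0)$, which is the tight case.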
Theorem 6.18: Rounding the VP by using two random hyperplanes as above gives a $\frac{3}{4}$-approximation algorithm for the correlation clustering problem.

Proof: Let $X_{ij}$ be a random variable which is 1 if vertices $i$ and $j$ end up in the same cluster, and 0 otherwise. Note that the probability that a single random hyperplane has the vectors $v_i$ and $v_j$ on different sides of the hyperplane is $\frac{1}{\pi} \arccos(v_i \cdot v_j)$ by Lemma 6.7. Then the probability that $v_i$ and $v_j$ are on the same side of a single random hyperplane is $1 - \frac{1}{\pi} \arccos(v_i \cdot v_j)$.

Furthermore, the probability that the vectors are both on the same sides of the two random hyperplanes defined by $r_1$ and $r_2$ is

$\left(1 - \frac{1}{\pi} \arccos(v_i \cdot v_j)\right)^2$,

since $r_1$ and $r_2$ are chosen independently. Thus $E[X_{ij}] = \left(1 - \frac{1}{\pi} \arccos(v_i \cdot v_j)\right)^2$. Let $W$ be a random variable denoting the weight of the partition. Observe that

$W = \sum_{(i,j) \in E} \left( w^+_{ij} X_{ij} + w^-_{ij} (1 - X_{ij}) \right)$.
Hence

$E[W] = \sum_{(i,j) \in E} \left( w^+_{ij} E[X_{ij}] + w^-_{ij} (1 - E[X_{ij}]) \right) = \sum_{(i,j) \in E} \left( w^+_{ij} \left(1 - \tfrac{1}{\pi} \arccos(v_i \cdot v_j)\right)^2 + w^-_{ij} \left(1 - \left(1 - \tfrac{1}{\pi} \arccos(v_i \cdot v_j)\right)^2\right) \right)$.

We now want to use Lemma 6.17 to bound each term in the sum from below; we can do so because the constraints of the VP imply that $v_i \cdot v_j \in [0, 1]$. Thus

$E[W] \ge 0.75 \sum_{(i,j) \in E} \left( w^+_{ij} (v_i \cdot v_j) + w^-_{ij} (1 - v_i \cdot v_j) \right) = 0.75 \, Z_{VP} \ge 0.75 \cdot \mathrm{OPT}$.
More informationFigure 1: An example of a hypercube 1: Given that the source and destination addresses are n-bit vectors, consider the following simple choice of rout
Tail Inequalities Wafi AlBalawi and Ashraf Osman Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV fwafi,osman@csee.wvu.edug 1 Routing in a Parallel Computer
More informationIn this lecture, we ll look at applications of duality to three problems:
Lecture 7 Duality Applications (Part II) In this lecture, we ll look at applications of duality to three problems: 1. Finding maximum spanning trees (MST). We know that Kruskal s algorithm finds this,
More informationLinear Programming. Readings: Read text section 11.6, and sections 1 and 2 of Tom Ferguson s notes (see course homepage).
Linear Programming Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory: Feasible Set, Vertices, Existence of Solutions. Equivalent formulations. Outline
More informationLinear Programming. Widget Factory Example. Linear Programming: Standard Form. Widget Factory Example: Continued.
Linear Programming Widget Factory Example Learning Goals. Introduce Linear Programming Problems. Widget Example, Graphical Solution. Basic Theory:, Vertices, Existence of Solutions. Equivalent formulations.
More informationIn this lecture we discuss the complexity of approximation problems, and show how to prove they are NP-hard.
In this lecture we discuss the complexity of approximation problems, and show how to prove they are NP-hard. 1 We will show how one can prove such results and then apply this technique to some approximation
More informationLinear Programming in Small Dimensions
Linear Programming in Small Dimensions Lekcija 7 sergio.cabello@fmf.uni-lj.si FMF Univerza v Ljubljani Edited from slides by Antoine Vigneron Outline linear programming, motivation and definition one dimensional
More informationSolutions for the Exam 6 January 2014
Mastermath and LNMB Course: Discrete Optimization Solutions for the Exam 6 January 2014 Utrecht University, Educatorium, 13:30 16:30 The examination lasts 3 hours. Grading will be done before January 20,
More informationUniversal Rigidity of Generic Frameworks. Steven Gortler joint with Dylan Thurston
Universal Rigidity of Generic Frameworks Steven Gortler joint with Dylan Thurston Degrees of Graph Rigidity Framework of G Given a graph: G, v vertices, e edges Framework p: drawing in E d (Don t care
More informationApproximation Algorithms
Approximation Algorithms Given an NP-hard problem, what should be done? Theory says you're unlikely to find a poly-time algorithm. Must sacrifice one of three desired features. Solve problem to optimality.
More informationCOT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748
COT 6936: Topics in Algorithms! Giri Narasimhan ECS 254A / EC 2443; Phone: x3748 giri@cs.fiu.edu http://www.cs.fiu.edu/~giri/teach/cot6936_s12.html https://moodle.cis.fiu.edu/v2.1/course/view.php?id=174
More informationTreewidth and graph minors
Treewidth and graph minors Lectures 9 and 10, December 29, 2011, January 5, 2012 We shall touch upon the theory of Graph Minors by Robertson and Seymour. This theory gives a very general condition under
More informationAnalysis of an ( )-Approximation Algorithm for the Maximum Edge-Disjoint Paths Problem with Congestion Two
Analysis of an ( )-Approximation Algorithm for the Maximum Edge-Disjoint Paths Problem with Congestion Two Nabiha Asghar Department of Combinatorics & Optimization University of Waterloo, Ontario, Canada
More informationIntroduction to Algorithms / Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14
600.363 Introduction to Algorithms / 600.463 Algorithms I Lecturer: Michael Dinitz Topic: Approximation algorithms Date: 11/18/14 23.1 Introduction We spent last week proving that for certain problems,
More informationApproximation Algorithms
Approximation Algorithms Group Members: 1. Geng Xue (A0095628R) 2. Cai Jingli (A0095623B) 3. Xing Zhe (A0095644W) 4. Zhu Xiaolu (A0109657W) 5. Wang Zixiao (A0095670X) 6. Jiao Qing (A0095637R) 7. Zhang
More informationLecture 9: Linear Programming
Lecture 9: Linear Programming A common optimization problem involves finding the maximum of a linear function of N variables N Z = a i x i i= 1 (the objective function ) where the x i are all non-negative
More informationIntegral Geometry and the Polynomial Hirsch Conjecture
Integral Geometry and the Polynomial Hirsch Conjecture Jonathan Kelner, MIT Partially based on joint work with Daniel Spielman Introduction n A lot of recent work on Polynomial Hirsch Conjecture has focused
More informationApproximation algorithms for minimum-cost k-(s, T ) connected digraphs
Approximation algorithms for minimum-cost k-(s, T ) connected digraphs J. Cheriyan B. Laekhanukit November 30, 2010 Abstract We introduce a model for NP-hard problems pertaining to the connectivity of
More informationCMPSCI611: The SUBSET-SUM Problem Lecture 18
CMPSCI611: The SUBSET-SUM Problem Lecture 18 We begin today with the problem we didn t get to at the end of last lecture the SUBSET-SUM problem, which we also saw back in Lecture 8. The input to SUBSET-
More informationby conservation of flow, hence the cancelation. Similarly, we have
Chapter 13: Network Flows and Applications Network: directed graph with source S and target T. Non-negative edge weights represent capacities. Assume no edges into S or out of T. (If necessary, we can
More information2010 SMT Power Round
Definitions 2010 SMT Power Round A graph is a collection of points (vertices) connected by line segments (edges). In this test, all graphs will be simple any two vertices will be connected by at most one
More information1. Lecture notes on bipartite matching
Massachusetts Institute of Technology 18.453: Combinatorial Optimization Michel X. Goemans February 5, 2017 1. Lecture notes on bipartite matching Matching problems are among the fundamental problems in
More informationConnected Components of Underlying Graphs of Halving Lines
arxiv:1304.5658v1 [math.co] 20 Apr 2013 Connected Components of Underlying Graphs of Halving Lines Tanya Khovanova MIT November 5, 2018 Abstract Dai Yang MIT In this paper we discuss the connected components
More informationLinear Programming Duality and Algorithms
COMPSCI 330: Design and Analysis of Algorithms 4/5/2016 and 4/7/2016 Linear Programming Duality and Algorithms Lecturer: Debmalya Panigrahi Scribe: Tianqi Song 1 Overview In this lecture, we will cover
More informationRepetition: Primal Dual for Set Cover
Repetition: Primal Dual for Set Cover Primal Relaxation: k min i=1 w ix i s.t. u U i:u S i x i 1 i {1,..., k} x i 0 Dual Formulation: max u U y u s.t. i {1,..., k} u:u S i y u w i y u 0 Harald Räcke 428
More informationInteger Programming Theory
Integer Programming Theory Laura Galli October 24, 2016 In the following we assume all functions are linear, hence we often drop the term linear. In discrete optimization, we seek to find a solution x
More informationBipartite Roots of Graphs
Bipartite Roots of Graphs Lap Chi Lau Department of Computer Science University of Toronto Graph H is a root of graph G if there exists a positive integer k such that x and y are adjacent in G if and only
More informationProgramming Problem for the 1999 Comprehensive Exam
Programming Problem for the 1999 Comprehensive Exam Out: Monday 1/25/99 at 9:00 am Due: Friday 1/29/99 at 5:00 pm Department of Computer Science Brown University Providence, RI 02912 1.0 The Task This
More informationInterleaving Schemes on Circulant Graphs with Two Offsets
Interleaving Schemes on Circulant raphs with Two Offsets Aleksandrs Slivkins Department of Computer Science Cornell University Ithaca, NY 14853 slivkins@cs.cornell.edu Jehoshua Bruck Department of Electrical
More informationCS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I. Instructor: Shaddin Dughmi
CS599: Convex and Combinatorial Optimization Fall 2013 Lecture 14: Combinatorial Problems as Linear Programs I Instructor: Shaddin Dughmi Announcements Posted solutions to HW1 Today: Combinatorial problems
More informationCS270 Combinatorial Algorithms & Data Structures Spring Lecture 19:
CS270 Combinatorial Algorithms & Data Structures Spring 2003 Lecture 19: 4.1.03 Lecturer: Satish Rao Scribes: Kevin Lacker and Bill Kramer Disclaimer: These notes have not been subjected to the usual scrutiny
More information