

Outline: Main Questions
1. Data Clustering: What is the k-means method?
2. Smoothed Analysis: What can we do when worst-case analysis is too pessimistic?
3. Smoothed Analysis of the k-means Method: What is the smoothed complexity of the k-means method?
4. Extensions and Conclusions

Data Clustering
Input: point set X ⊆ ℝ^d with |X| = n, number of clusters k
Output: partition C_1, ..., C_k and centers c_1, ..., c_k ∈ ℝ^d
Goal: minimize Σ_{i=1}^k Σ_{x ∈ C_i} ‖x − c_i‖²
Theory: the problem is NP-hard, but a PTAS exists (with running time exponential in k).
Practice: the k-means method.
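The objective above translates directly into code. A minimal sketch (the function name and data layout are my own) that evaluates the k-means cost of a candidate solution:

```python
def kmeans_cost(points, centers, assignment):
    """Sum over all points of the squared Euclidean distance to the assigned center."""
    total = 0.0
    for x, i in zip(points, assignment):
        total += sum((xj - cj) ** 2 for xj, cj in zip(x, centers[i]))
    return total

# Two points in one cluster whose center is their mean: cost 1 + 1 = 2.
print(kmeans_cost([(0.0, 0.0), (2.0, 0.0)], [(1.0, 0.0)], [0, 0]))  # 2.0
```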

Local Search: k-means Method
Local search based on two observations:
1. clusters C_i fixed ⇒ optimal centers are c_i = (1/|C_i|) Σ_{x ∈ C_i} x
2. centers c_i fixed ⇒ optimal clusters C_i assign each point to its nearest center
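Observation 1 says that, with the clusters fixed, the cost-minimizing center of a cluster is its center of mass. A quick numeric sanity check of this fact (helper names and the random data are mine): no sampled candidate center beats the centroid.

```python
import random

def sq_dist(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def centroid(cluster):
    return tuple(sum(x[j] for x in cluster) / len(cluster) for j in range(len(cluster[0])))

rng = random.Random(0)
cluster = [(rng.random(), rng.random()) for _ in range(20)]
best = sum(sq_dist(x, centroid(cluster)) for x in cluster)

# The centroid globally minimizes the sum of squared distances,
# so every other candidate center costs at least as much.
for _ in range(100):
    c = (rng.random(), rng.random())
    assert sum(sq_dist(x, c) for x in cluster) >= best
```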

k-means Method
Input: X ⊆ ℝ^d, k
1  choose c_1, ..., c_k ∈ ℝ^d
2  repeat
3      partition X into C_1, ..., C_k (each point joins its nearest center)
4      c_i ← (1/|C_i|) Σ_{x ∈ C_i} x
5  until stable
"by far the most popular clustering algorithm used in scientific and industrial applications" (Berkhin 2002)
"in practice the number of iterations is generally much less than the number of points" (Duda et al. 2001)
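The five pseudocode lines above can be sketched as a short, self-contained program (pure Python; names are mine, and sampling the initial centers from the input is one common choice, not part of the slide):

```python
import random

def sq_dist(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

def kmeans(points, k, max_iter=100, seed=0):
    centers = random.Random(seed).sample(points, k)        # 1: choose c_1, ..., c_k
    assignment = None
    for _ in range(max_iter):                              # 2: repeat
        new = [min(range(k), key=lambda i: sq_dist(x, centers[i]))
               for x in points]                            # 3: partition X into C_1, ..., C_k
        if new == assignment:                              # 5: until stable
            break
        assignment = new
        for i in range(k):                                 # 4: c_i <- mean of C_i
            cluster = [x for x, a in zip(points, assignment) if a == i]
            if cluster:                                    # keep the old center if C_i is empty
                centers[i] = tuple(sum(col) / len(cluster) for col in zip(*cluster))
    return centers, assignment

# Two well-separated groups of three points end up in different clusters.
pts = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (10.0, 10.0), (10.0, 11.0), (11.0, 10.0)]
centers, assignment = kmeans(pts, 2)
print(assignment)
```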

k-means Method: Theoretical Results
Running time:
Upper bound: at most (k²n)^(kd) iterations, since no clustering can occur twice.
Lower bound: at least 2^Ω(k) iterations for d ≥ 2. [Andrea Vattani (SoCG 09)]
Quality: local optima can be arbitrarily bad.
Huge discrepancy between theory and practice. (Focus of this talk: running time.)


Smoothed Analysis
Worst-case analysis is too pessimistic: typical instances are not adversarial.
Average-case analysis is unrealistic: what is the right probability distribution?
Smoothed analysis uses a less powerful adversary:
1. The adversary chooses an instance I.
2. A small amount of random noise is added to I (σ = amount of noise).
T(n, σ) = max_{I, |I| = n} E[running time(per_σ(I))]
This models, e.g., measurement errors or numerical imprecision.
Low smoothed complexity ⇒ bad performance is unlikely in practice.
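The definition of T(n, σ) has a direct experimental analogue: perturb an adversarial instance with Gaussian noise and average the measured cost. A toy Monte Carlo harness under that model (function names are mine; `run` stands for any running-time measure, e.g. an iteration counter):

```python
import random
import statistics

def perturb(instance, sigma, rng):
    """Add independent Gaussian noise with standard deviation sigma to every coordinate."""
    return [tuple(v + rng.gauss(0.0, sigma) for v in x) for x in instance]

def smoothed_cost(instance, run, sigma, trials=50, seed=1):
    """Monte Carlo estimate of E[run(per_sigma(I))] for one adversarial instance I."""
    rng = random.Random(seed)
    return statistics.mean(run(perturb(instance, sigma, rng)) for _ in range(trials))

# With run = len (a trivial 'running time'), the estimate is exactly n.
print(smoothed_cost([(0.0, 0.0)] * 5, len, sigma=0.1))  # 5
```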

Smoothed Analysis of the Simplex Method
Smoothed analysis was introduced for the simplex method.
Linear programming: max c·x such that Ax ≤ b
Simplex method: follow edges of the polytope. Slow in the worst case.
Perturbation: add an independent Gaussian with standard deviation σ to every entry of the matrix A and the vector b.
Theorem [Spielman, Teng 2001]: the smoothed running time of the simplex method with the shadow vertex pivot rule is polynomial in n, m, and 1/σ.
In 2006, Roman Vershynin improved the bound from O(m^86 n^55 σ^(−30)) to O(max{n^5 log² m, n^9 log⁴ n, n^3 σ^(−4)}).
Further results: condition number of matrices, interior-point method, Gaussian elimination, etc.


Smoothed Analysis of k-means
Model: every point is perturbed by an independent d-dimensional Gaussian with standard deviation σ.
T(n, σ) = max_{X, |X| = n} E[#Iterations(per_σ(X))]
Arthur, Vassilvitskii (FOCS 2006): smoothed number of iterations is at most poly(n^k, 1/σ).
Manthey, Röglin (SODA 2009): smoothed number of iterations is at most poly(n^√k, 1/σ) and k^(kd) · poly(n, 1/σ).
Our result (FOCS 2009): smoothed number of iterations is poly(n, 1/σ).

General Approach
1. Initial potential: after the first iteration, whp Φ = Σ_{i=1}^k Σ_{x ∈ C_i} ‖x − c_i‖² = O(poly(n)).
2. Smallest possible improvement: Δ = min over iterations C → succ(C) of (Φ(C) − Φ(succ(C))) ⇒ at most O(poly(n)/Δ) steps.
In the worst case, Δ can be arbitrarily small.
Lemma: for d ≥ 2 and every X ⊆ [0, 1]^d, in the model of smoothed analysis E[1/Δ] = poly(n, 1/σ), and hence max_{X, |X| = n} E[#Iterations(per_σ(X))] = poly(n, 1/σ).

When does the potential drop?
1. A center moves by ε ⇒ improvement ε².
2. A point at distance ε from the bisector of two centers at distance δ changes its assignment ⇒ improvement 2εδ.
Goal: show that in every iteration either a center moves significantly, or a reassigned point is significantly far from its bisector.
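The second drop condition is a plane-geometry identity: reassigning x from c₁ to a closer center c₂ lowers the potential by ‖x − c₁‖² − ‖x − c₂‖², which equals 2 · (distance of x to the bisector) · (distance between the centers). A numeric sanity check of this identity (the concrete coordinates are mine):

```python
def sq_dist(x, c):
    return sum((xi - ci) ** 2 for xi, ci in zip(x, c))

delta = 3.0                     # distance between the two centers
eps = 0.25                      # distance of x from their bisector
c1, c2 = (0.0, 0.0), (delta, 0.0)
x = (delta / 2 + eps, 1.7)      # x lies eps beyond the bisector, on c2's side

improvement = sq_dist(x, c1) - sq_dist(x, c2)
assert abs(improvement - 2 * eps * delta) < 1e-9  # improvement = 2 * eps * delta
```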

How large is Δ?
A configuration C is ε-bad if Φ(C) − Φ(succ(C)) ≤ ε.
Naive approach, a union bound over all configurations:
Pr[∃ configuration C: C is ε-bad] ≤ Σ_{configurations C} Pr[C is ε-bad]
Problem: there are too many configurations: k^n.
Instead, cover the space of all configurations by a few sets F_1, ..., F_5 and bound
Pr[∃ configuration C: C is ε-bad] ≤ Σ_{i=1}^5 Pr[F_i].

Transition Graph
Transition graph G = (V, E) of an iteration C → succ(C):
V: the clusters
E: a labeled directed edge for each reassigned point
Approach: union bound over the different transition blueprints.
First glance: a natural idea. Second glance: not enough information, e.g., no information about the positions of centers and bisectors. Third glance: enough information!

Approximate Centers
Goal: show that in every iteration either a center moves significantly, or a reassigned point is significantly far from its bisector.
Suppose cluster C gains the points A and loses the points B ⊆ C. The new center of mass satisfies
cm((C ∪ A) \ B) − cm(C) = ((|A| − |B|) / (|C| + |A| − |B|)) · (approx(A, B) − cm(C)),
where approx(A, B) = (|A| · cm(A) − |B| · cm(B)) / (|A| − |B|) depends only on the blueprint.
⇒ a small potential drop forces cm(C) to be close to approx(A, B).
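The bookkeeping behind approximate centers rests on one exact fact: if cluster C gains the points A and loses the points B ⊆ C, the new center of mass is the fixed combination (|C|·cm(C) + |A|·cm(A) − |B|·cm(B)) / (|C| + |A| − |B|). A numeric check of that identity (the example data is mine):

```python
def cm(points):
    """Center of mass of a list of points."""
    return tuple(sum(p[j] for p in points) / len(points) for j in range(len(points[0])))

C = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0), (4.0, 4.0)]
A = [(6.0, 0.0)]                 # points joining the cluster
B = [(4.0, 4.0)]                 # points leaving it (a subset of C)
new_cluster = [p for p in C if p not in B] + A

nC, nA, nB = len(C), len(A), len(B)
expected = tuple((nC * c + nA * a - nB * b) / (nC + nA - nB)
                 for c, a, b in zip(cm(C), cm(A), cm(B)))
assert all(abs(u - v) < 1e-9 for u, v in zip(cm(new_cluster), expected))
```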

Proof of Main Theorem
Theorem: the smoothed number of iterations is poly(n, 1/σ).
Number of blueprints with m edges: (k²n)^m.
Probability that a fixed data point has distance at most ε from its approximate bisector: ε/σ.
Probability that all m data points are ε-close to their approximate bisectors: (ε/σ)^m.
Union bound: Pr[∃ ε-bad blueprint] ≤ (k²n)^m · (ε/σ)^m, which is very unlikely for ε = 1/poly(n, 1/σ).
Technical difficulties: the data points are not independent of the approximate bisectors, approximate centers are not defined for balanced clusters, blueprints must have enough edges, ...


Kullback-Leibler Divergence
Text classification and the bag-of-words model: S = set of all words; count words and normalize to get a probability distribution p: S → [0, 1].
Kullback-Leibler divergence (relative entropy):
KLD(p, q) = Σ_{i=1}^d p_i log(p_i / q_i)
= (number of bits to encode p with a Huffman code for q) − (number of bits to encode p with a Huffman code for p)
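The divergence above is a one-liner in pure Python (base-2 logarithms, matching the bits interpretation; the convention 0 · log 0 = 0 is handled by skipping zero entries):

```python
import math

def kld(p, q):
    """Kullback-Leibler divergence sum_i p_i * log2(p_i / q_i), in bits."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.5, 0.25, 0.25]
q = [0.25, 0.5, 0.25]
print(kld(p, q))         # 0.25 bits
assert kld(p, p) == 0.0  # zero exactly when q = p
assert kld(p, q) > 0.0   # Gibbs' inequality
```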

Bregman Divergences
Bregman divergences are distance measures that generalize squared Euclidean distances and the Kullback-Leibler divergence.
Ackermann, Blömer, Sohler (SODA 2008): approximation scheme for special cases of Bregman divergences (e.g., the Kullback-Leibler divergence).
Manthey, Röglin (ISAAC 2009): for any well-behaved Bregman divergence, the smoothed number of iterations is at most poly(n^√k, 1/σ) and k^(kd) · poly(n, 1/σ); this is poly(n, 1/σ) if d, k = O(√(log n / log log n)) (or d = 1).
The polynomial bound does not extend, as it uses special properties of Gaussian perturbations.

Further Results and Open Questions
Summary: worst-case instances are often fragile; smoothed analysis often leads to a better understanding of observed practical behavior.
Future research:
improve the exponents for k-means (currently n^30)
a better understanding of the dynamics seems necessary for this
an explanation for the good approximation ratio
a better analysis of Bregman divergences
a more systematic theory of smoothed local search: are all local search problems in PLS easy in smoothed analysis?


More information

Problem 1: Complexity of Update Rules for Logistic Regression

Problem 1: Complexity of Update Rules for Logistic Regression Case Study 1: Estimating Click Probabilities Tackling an Unknown Number of Features with Sketching Machine Learning for Big Data CSE547/STAT548, University of Washington Emily Fox January 16 th, 2014 1

More information

6 Randomized rounding of semidefinite programs

6 Randomized rounding of semidefinite programs 6 Randomized rounding of semidefinite programs We now turn to a new tool which gives substantially improved performance guarantees for some problems We now show how nonlinear programming relaxations can

More information

Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm

Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm Smoothed Analysis of the Minimum-Mean Cycle Canceling Algorithm and the Network Simplex Algorithm Kamiel Cornelissen and Bodo Manthey University of Twente, Enschede, The Netherlands k.cornelissen@utwente.nl,

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS Coping with NP-completeness 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: weighted vertex cover LP rounding: weighted vertex cover generalized load balancing knapsack problem

More information

Polyhedral Compilation Foundations

Polyhedral Compilation Foundations Polyhedral Compilation Foundations Louis-Noël Pouchet pouchet@cse.ohio-state.edu Dept. of Computer Science and Engineering, the Ohio State University Feb 15, 2010 888.11, Class #4 Introduction: Polyhedral

More information

Spectral Clustering X I AO ZE N G + E L HA M TA BA S SI CS E CL A S S P R ESENTATION MA RCH 1 6,

Spectral Clustering X I AO ZE N G + E L HA M TA BA S SI CS E CL A S S P R ESENTATION MA RCH 1 6, Spectral Clustering XIAO ZENG + ELHAM TABASSI CSE 902 CLASS PRESENTATION MARCH 16, 2017 1 Presentation based on 1. Von Luxburg, Ulrike. "A tutorial on spectral clustering." Statistics and computing 17.4

More information

Diffusion and Clustering on Large Graphs

Diffusion and Clustering on Large Graphs Diffusion and Clustering on Large Graphs Alexander Tsiatas Final Defense 17 May 2012 Introduction Graphs are omnipresent in the real world both natural and man-made Examples of large graphs: The World

More information

Welcome to the course Algorithm Design

Welcome to the course Algorithm Design Welcome to the course Algorithm Design Summer Term 2011 Friedhelm Meyer auf der Heide Lecture 13, 15.7.2011 Friedhelm Meyer auf der Heide 1 Topics - Divide & conquer - Dynamic programming - Greedy Algorithms

More information

COT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748

COT 6936: Topics in Algorithms! Giri Narasimhan. ECS 254A / EC 2443; Phone: x3748 COT 6936: Topics in Algorithms! Giri Narasimhan ECS 254A / EC 2443; Phone: x3748 giri@cs.fiu.edu http://www.cs.fiu.edu/~giri/teach/cot6936_s12.html https://moodle.cis.fiu.edu/v2.1/course/view.php?id=174

More information

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used.

4.12 Generalization. In back-propagation learning, as many training examples as possible are typically used. 1 4.12 Generalization In back-propagation learning, as many training examples as possible are typically used. It is hoped that the network so designed generalizes well. A network generalizes well when

More information

The 4/3 Additive Spanner Exponent is Tight

The 4/3 Additive Spanner Exponent is Tight The 4/3 Additive Spanner Exponent is Tight Amir Abboud and Greg Bodwin Stanford University June 8, 2016 Our Question How much can you compress a graph into low space while still being able to approximately

More information

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018

CS 580: Algorithm Design and Analysis. Jeremiah Blocki Purdue University Spring 2018 CS 580: Algorithm Design and Analysis Jeremiah Blocki Purdue University Spring 2018 Chapter 11 Approximation Algorithms Slides by Kevin Wayne. Copyright @ 2005 Pearson-Addison Wesley. All rights reserved.

More information

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 50

CS 473: Algorithms. Ruta Mehta. Spring University of Illinois, Urbana-Champaign. Ruta (UIUC) CS473 1 Spring / 50 CS 473: Algorithms Ruta Mehta University of Illinois, Urbana-Champaign Spring 2018 Ruta (UIUC) CS473 1 Spring 2018 1 / 50 CS 473: Algorithms, Spring 2018 Introduction to Linear Programming Lecture 18 March

More information

Approximation-stability and proxy objectives

Approximation-stability and proxy objectives Harnessing implicit assumptions in problem formulations: Approximation-stability and proxy objectives Avrim Blum Carnegie Mellon University Based on work joint with Pranjal Awasthi, Nina Balcan, Anupam

More information

Clustering. Unsupervised Learning

Clustering. Unsupervised Learning Clustering. Unsupervised Learning Maria-Florina Balcan 03/02/2016 Clustering, Informal Goals Goal: Automatically partition unlabeled data into groups of similar datapoints. Question: When and why would

More information

The Simplex Algorithm for LP, and an Open Problem

The Simplex Algorithm for LP, and an Open Problem The Simplex Algorithm for LP, and an Open Problem Linear Programming: General Formulation Inputs: real-valued m x n matrix A, and vectors c in R n and b in R m Output: n-dimensional vector x There is one

More information

/ Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang

/ Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang 600.469 / 600.669 Approximation Algorithms Lecturer: Michael Dinitz Topic: Linear Programming Date: 2/24/15 Scribe: Runze Tang 9.1 Linear Programming Suppose we are trying to approximate a minimization

More information

Approximate Clustering with Same-Cluster Queries

Approximate Clustering with Same-Cluster Queries Approximate Clustering with Same-Cluster Queries Nir Ailon 1, Anup Bhattacharya 2, Ragesh Jaiswal 3, and Amit Kumar 4 1 Technion, Haifa, Israel nailon@cs.technion.ac.il 2 Department of Computer Science

More information

Models of distributed computing: port numbering and local algorithms

Models of distributed computing: port numbering and local algorithms Models of distributed computing: port numbering and local algorithms Jukka Suomela Adaptive Computing Group Helsinki Institute for Information Technology HIIT University of Helsinki FMT seminar, 26 February

More information

Approximation Algorithms for Clustering Uncertain Data

Approximation Algorithms for Clustering Uncertain Data Approximation Algorithms for Clustering Uncertain Data Graham Cormode AT&T Labs - Research graham@research.att.com Andrew McGregor UCSD / MSR / UMass Amherst andrewm@ucsd.edu Introduction Many applications

More information

(67686) Mathematical Foundations of AI July 30, Lecture 11

(67686) Mathematical Foundations of AI July 30, Lecture 11 (67686) Mathematical Foundations of AI July 30, 2008 Lecturer: Ariel D. Procaccia Lecture 11 Scribe: Michael Zuckerman and Na ama Zohary 1 Cooperative Games N = {1,...,n} is the set of players (agents).

More information

Great Theoretical Ideas in Computer Science

Great Theoretical Ideas in Computer Science 15-251 Great Theoretical Ideas in Computer Science Lecture 20: Randomized Algorithms November 5th, 2015 So far Formalization of computation/algorithm Computability / Uncomputability Computational complexity

More information

Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice

Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice Smoothed Analysis: An Attempt to Explain the Behavior of Algorithms in Practice Daniel A. Spielman Department of Computer Science Yale University spielman@cs.yale.edu Shang-Hua Teng Department of Computer

More information

11.1 Facility Location

11.1 Facility Location CS787: Advanced Algorithms Scribe: Amanda Burton, Leah Kluegel Lecturer: Shuchi Chawla Topic: Facility Location ctd., Linear Programming Date: October 8, 2007 Today we conclude the discussion of local

More information

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize.

Lecture notes on the simplex method September We will present an algorithm to solve linear programs of the form. maximize. Cornell University, Fall 2017 CS 6820: Algorithms Lecture notes on the simplex method September 2017 1 The Simplex Method We will present an algorithm to solve linear programs of the form maximize subject

More information

INTERNATIONAL INSTITUTE OF MANAGEMENT, ENGINEERING & TECHNOLOGY, JAIPUR (IIMET)

INTERNATIONAL INSTITUTE OF MANAGEMENT, ENGINEERING & TECHNOLOGY, JAIPUR (IIMET) INTERNATIONAL INSTITUTE OF MANAGEMENT, ENGINEERING & TECHNOLOGY, JAIPUR (IIMET) UNIT-1 Q.1 What is the significance of using notations in analysis of algorithm? Explain the various notations in brief?

More information

arxiv: v1 [cs.ds] 20 Dec 2014

arxiv: v1 [cs.ds] 20 Dec 2014 On the Shadow Simplex method for curved polyhedra Daniel Dadush 1 and Nicolai Hähnle 2 1 Centrum Wiskunde & Informatica, Netherlands dadush@cwi.nl 2 Universität Bonn, Germany haehnle@or.uni-bonn.de arxiv:1412.675v1

More information

Lecture Notes on Karger s Min-Cut Algorithm. Eric Vigoda Georgia Institute of Technology Last updated for Advanced Algorithms, Fall 2013.

Lecture Notes on Karger s Min-Cut Algorithm. Eric Vigoda Georgia Institute of Technology Last updated for Advanced Algorithms, Fall 2013. Lecture Notes on Karger s Min-Cut Algorithm. Eric Vigoda Georgia Institute of Technology Last updated for 4540 - Advanced Algorithms, Fall 2013. Today s topic: Karger s min-cut algorithm [3]. Problem Definition

More information

Submodularity Reading Group. Matroid Polytopes, Polymatroid. M. Pawan Kumar

Submodularity Reading Group. Matroid Polytopes, Polymatroid. M. Pawan Kumar Submodularity Reading Group Matroid Polytopes, Polymatroid M. Pawan Kumar http://www.robots.ox.ac.uk/~oval/ Outline Linear Programming Matroid Polytopes Polymatroid Polyhedron Ax b A : m x n matrix b:

More information

Clustering. Unsupervised Learning

Clustering. Unsupervised Learning Clustering. Unsupervised Learning Maria-Florina Balcan 11/05/2018 Clustering, Informal Goals Goal: Automatically partition unlabeled data into groups of similar datapoints. Question: When and why would

More information

Copyright 2000, Kevin Wayne 1

Copyright 2000, Kevin Wayne 1 Guessing Game: NP-Complete? 1. LONGEST-PATH: Given a graph G = (V, E), does there exists a simple path of length at least k edges? YES. SHORTEST-PATH: Given a graph G = (V, E), does there exists a simple

More information

W[1]-hardness. Dániel Marx 1. Hungarian Academy of Sciences (MTA SZTAKI) Budapest, Hungary

W[1]-hardness. Dániel Marx 1. Hungarian Academy of Sciences (MTA SZTAKI) Budapest, Hungary W[1]-hardness Dániel Marx 1 1 Institute for Computer Science and Control, Hungarian Academy of Sciences (MTA SZTAKI) Budapest, Hungary School on Parameterized Algorithms and Complexity Będlewo, Poland

More information

Learning Models of Similarity: Metric and Kernel Learning. Eric Heim, University of Pittsburgh

Learning Models of Similarity: Metric and Kernel Learning. Eric Heim, University of Pittsburgh Learning Models of Similarity: Metric and Kernel Learning Eric Heim, University of Pittsburgh Standard Machine Learning Pipeline Manually-Tuned Features Machine Learning Model Desired Output for Task Features

More information

Combinatorial Geometry & Approximation Algorithms

Combinatorial Geometry & Approximation Algorithms Combinatorial Geometry & Approximation Algorithms Timothy Chan U. of Waterloo PROLOGUE Analysis of Approx Factor in Analysis of Runtime in Computational Geometry Combinatorial Geometry Problem 1: Geometric

More information

Linear Programming and Clustering

Linear Programming and Clustering and Advisor: Dr. Leonard Schulman, Caltech Aditya Huddedar IIT Kanpur Advisor: Dr. Leonard Schulman, Caltech Aditya Huddedar IIT Kanpur and Outline of Talk 1 Introduction 2 Motivation 3 Our Approach 4

More information

11. APPROXIMATION ALGORITHMS

11. APPROXIMATION ALGORITHMS 11. APPROXIMATION ALGORITHMS load balancing center selection pricing method: vertex cover LP rounding: vertex cover generalized load balancing knapsack problem Lecture slides by Kevin Wayne Copyright 2005

More information

Computational complexity

Computational complexity Computational complexity Heuristic Algorithms Giovanni Righini University of Milan Department of Computer Science (Crema) Definitions: problems and instances A problem is a general question expressed in

More information

Copyright 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin Introduction to the Design & Analysis of Algorithms, 2 nd ed., Ch.

Copyright 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin Introduction to the Design & Analysis of Algorithms, 2 nd ed., Ch. Iterative Improvement Algorithm design technique for solving optimization problems Start with a feasible solution Repeat the following step until no improvement can be found: change the current feasible

More information

Introduction to Randomized Algorithms

Introduction to Randomized Algorithms Introduction to Randomized Algorithms Gopinath Mishra Advanced Computing and Microelectronics Unit Indian Statistical Institute Kolkata 700108, India. Organization 1 Introduction 2 Some basic ideas from

More information

Algorithmic Game Theory - Introduction, Complexity, Nash

Algorithmic Game Theory - Introduction, Complexity, Nash Algorithmic Game Theory - Introduction, Complexity, Nash Branislav Bošanský Czech Technical University in Prague branislav.bosansky@agents.fel.cvut.cz February 25, 2018 About This Course main topics of

More information

15-451/651: Design & Analysis of Algorithms November 4, 2015 Lecture #18 last changed: November 22, 2015

15-451/651: Design & Analysis of Algorithms November 4, 2015 Lecture #18 last changed: November 22, 2015 15-451/651: Design & Analysis of Algorithms November 4, 2015 Lecture #18 last changed: November 22, 2015 While we have good algorithms for many optimization problems, the previous lecture showed that many

More information

Lecture 7. s.t. e = (u,v) E x u + x v 1 (2) v V x v 0 (3)

Lecture 7. s.t. e = (u,v) E x u + x v 1 (2) v V x v 0 (3) COMPSCI 632: Approximation Algorithms September 18, 2017 Lecturer: Debmalya Panigrahi Lecture 7 Scribe: Xiang Wang 1 Overview In this lecture, we will use Primal-Dual method to design approximation algorithms

More information

Partial vs. Complete Domination:

Partial vs. Complete Domination: Partial vs. Complete Domination: t-dominating SET Joachim Kneis Daniel Mölle Peter Rossmanith Theory Group, RWTH Aachen University January 22, 2007 Complete vs. Partial Solutions VERTEX COVER (VC): DOMINATING

More information

CS321 Introduction To Numerical Methods

CS321 Introduction To Numerical Methods CS3 Introduction To Numerical Methods Fuhua (Frank) Cheng Department of Computer Science University of Kentucky Lexington KY 456-46 - - Table of Contents Errors and Number Representations 3 Error Types

More information

CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017

CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017 CMSC 451: Lecture 22 Approximation Algorithms: Vertex Cover and TSP Tuesday, Dec 5, 2017 Reading: Section 9.2 of DPV. Section 11.3 of KT presents a different approximation algorithm for Vertex Cover. Coping

More information

The Ordered Covering Problem

The Ordered Covering Problem The Ordered Covering Problem Uriel Feige Yael Hitron November 8, 2016 Abstract We introduce the Ordered Covering (OC) problem. The input is a finite set of n elements X, a color function c : X {0, 1} and

More information

SOME PROBLEMS IN ASYMPTOTIC CONVEX GEOMETRY AND RANDOM MATRICES MOTIVATED BY NUMERICAL ALGORITHMS

SOME PROBLEMS IN ASYMPTOTIC CONVEX GEOMETRY AND RANDOM MATRICES MOTIVATED BY NUMERICAL ALGORITHMS SOME PROBLEMS IN ASYMPTOTIC CONVEX GEOMETRY AND RANDOM MATRICES MOTIVATED BY NUMERICAL ALGORITHMS ROMAN VERSHYNIN Abstract. The simplex method in Linear Programming motivates several problems of asymptotic

More information

Theorem 2.9: nearest addition algorithm

Theorem 2.9: nearest addition algorithm There are severe limits on our ability to compute near-optimal tours It is NP-complete to decide whether a given undirected =(,)has a Hamiltonian cycle An approximation algorithm for the TSP can be used

More information

Subexponential lower bounds for randomized pivoting rules for the simplex algorithm

Subexponential lower bounds for randomized pivoting rules for the simplex algorithm Subexponential lower bounds for randomized pivoting rules for the simplex algorithm Oliver Friedmann 1 Thomas Dueholm Hansen 2 Uri Zwick 3 1 Department of Computer Science, University of Munich, Germany.

More information

Big Data Analytics. Special Topics for Computer Science CSE CSE Feb 11

Big Data Analytics. Special Topics for Computer Science CSE CSE Feb 11 Big Data Analytics Special Topics for Computer Science CSE 4095-001 CSE 5095-005 Feb 11 Fei Wang Associate Professor Department of Computer Science and Engineering fei_wang@uconn.edu Clustering II Spectral

More information

Smoothed Motion Complexity

Smoothed Motion Complexity Smoothed Motion Complexity Valentina Damerow 1, Friedhelm Meyer auf der Heide 2, Harald Räcke 2, Christian Scheideler 3, and Christian Sohler 2 1 PaSCo Graduate School and 2 Heinz Nixdorf Institute and

More information

Who s The Weakest Link?

Who s The Weakest Link? Who s The Weakest Link? Nikhil Devanur, Richard J. Lipton, and Nisheeth K. Vishnoi {nikhil,rjl,nkv}@cc.gatech.edu Georgia Institute of Technology, Atlanta, GA 30332, USA. Abstract. In this paper we consider

More information

Prove, where is known to be NP-complete. The following problems are NP-Complete:

Prove, where is known to be NP-complete. The following problems are NP-Complete: CMPSCI 601: Recall From Last Time Lecture 21 To prove is NP-complete: Prove NP. Prove, where is known to be NP-complete. The following problems are NP-Complete: SAT (Cook-Levin Theorem) 3-SAT 3-COLOR CLIQUE

More information

ACTUALLY DOING IT : an Introduction to Polyhedral Computation

ACTUALLY DOING IT : an Introduction to Polyhedral Computation ACTUALLY DOING IT : an Introduction to Polyhedral Computation Jesús A. De Loera Department of Mathematics Univ. of California, Davis http://www.math.ucdavis.edu/ deloera/ 1 What is a Convex Polytope? 2

More information

February 2017 (1/20) 2 Piecewise Polynomial Interpolation 2.2 (Natural) Cubic Splines. MA378/531 Numerical Analysis II ( NA2 )

February 2017 (1/20) 2 Piecewise Polynomial Interpolation 2.2 (Natural) Cubic Splines. MA378/531 Numerical Analysis II ( NA2 ) f f f f f (/2).9.8.7.6.5.4.3.2. S Knots.7.6.5.4.3.2. 5 5.2.8.6.4.2 S Knots.2 5 5.9.8.7.6.5.4.3.2..9.8.7.6.5.4.3.2. S Knots 5 5 S Knots 5 5 5 5.35.3.25.2.5..5 5 5.6.5.4.3.2. 5 5 4 x 3 3.5 3 2.5 2.5.5 5

More information

Newton Polygons of L-Functions

Newton Polygons of L-Functions Newton Polygons of L-Functions Phong Le Department of Mathematics University of California, Irvine June 2009/Ph.D. Defense Phong Le Newton Polygons of L-Functions 1/33 Laurent Polynomials Let q = p a where

More information