INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR

Stamp / Signature of the Invigilator

EXAMINATION (End Semester), SEMESTER (Autumn)
Roll Number / Section / Name
Subject Number: CS60035
Subject Name: Selected Topics in Algorithms
Department / Center of the Student
Additional sheets

Important Instructions and Guidelines for Students

1. You must occupy your seat as per the Examination Schedule/Sitting Plan.
2. Do not keep mobile phones or any similar electronic gadgets with you, even in switched-off mode.
3. Loose papers, class notes, books or any such materials must not be in your possession, even if they are irrelevant to the subject of your examination.
4. Data books, codes, graph papers, relevant standard tables/charts or any other materials are allowed only when instructed by the paper-setter.
5. Use of an instrument box, pencil box and a non-programmable calculator is allowed during the examination. However, exchange of these items or of any other papers (including question papers) is not permitted.
6. Write on both sides of the answer script and do not tear off any page. Use the last page(s) of the answer script for rough work. Report to the invigilator if the answer script has torn or distorted page(s).
7. It is your responsibility to ensure that you have signed the Attendance Sheet. Keep your Admit Card/Identity Card on the desk for checking by the invigilator.
8. You may leave the examination hall for the wash room or for drinking water for a very short period. Record your absence from the Examination Hall in the register provided. Smoking and the consumption of any kind of beverage are strictly prohibited inside the Examination Hall.
9. Do not leave the Examination Hall without submitting your answer script to the invigilator. In any case, you are not allowed to take the answer script away with you. After the completion of the examination, do not leave your seat until the invigilators have collected all the answer scripts.
10. During the examination, either inside or outside the Examination Hall, gathering information from any kind of source, exchanging information with others, or any such attempt will be treated as use of unfair means. Do not adopt unfair means and do not indulge in unseemly behavior.

Violation of any of the above instructions may lead to severe punishment.

Signature of the Student

To be filled in by the examiner:
Question Number: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, Total
Marks Obtained
Marks obtained (in words) / Signature of the Examiner / Signature of the Scrutineer

CS60035/CS60086 Selected Topics in Algorithms, Autumn 2015-2016
End-Semester Test, 26 November 2015, CSE-107, 2:00-5:00pm (+ ε)
Maximum marks: 50

Write your answers in the question paper itself. Be brief and precise. Answer all questions.

1. You start with the array A = [1, 2, ..., n]. Your task is to return a random permutation of 1, 2, ..., n in A itself (that is, using only O(1) additional space). Propose an efficient algorithm to solve this problem. Argue that your algorithm is capable of generating any permutation of 1, 2, ..., n with equal probability. (6)

Solution: Assume that array indexing is zero-based. The following algorithm solves the given problem in Θ(n) time.

    For i = n−1, n−2, ..., 1 (in that order), repeat:
        Choose j uniformly at random from {0, 1, ..., i}.
        Swap A[i] with A[j].

Start with any permutation π of 1, 2, ..., n stored in A, and run selection sort on A (for i = n−1 down to 1, swap the maximum of A[0..i] into position i). Conversely, if you start with A = [1, 2, ..., n] and apply the swaps of that selection sort in the reverse sequence, you end up with π stored in A. Therefore the above algorithm is capable of generating any permutation of 1, 2, ..., n. Different sequences of choices of j in the loop generate different permutations, and the number of such sequences is exactly n(n−1)···2 = n!, so each permutation of 1, 2, ..., n is generated with probability 1/n!.
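To make the loop concrete, here is a minimal Python sketch of the algorithm above (this is the classical Fisher-Yates/Knuth shuffle; the function name and the use of the standard random module are our own choices, not part of the exam text):

    import random

    def shuffle_in_place(A):
        """Random permutation of A in place, using O(1) extra space."""
        n = len(A)
        for i in range(n - 1, 0, -1):      # i = n-1, n-2, ..., 1
            j = random.randint(0, i)       # j uniform in {0, 1, ..., i}
            A[i], A[j] = A[j], A[i]        # swap A[i] with A[j]
        return A

    # Example: shuffle [1, 2, ..., 10] in place.
    A = list(range(1, 11))
    print(shuffle_in_place(A))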

2. Let G = (V, E) be an undirected graph. A subset C ⊆ V is called a vertex cover of G if each edge e ∈ E has at least one endpoint in C. We want to compute a vertex cover C of G with as few vertices as possible (the minimum vertex cover problem). For simplicity, assume that G is connected. We make a DFS traversal of G (starting from an arbitrary vertex). Denote by T the DFS tree produced by the traversal. Let C consist of all the non-leaf vertices of T.

(a) Prove that C is a vertex cover of G. (6)

Solution: Let e = (u, v) ∈ E. Since a DFS traversal of an undirected graph produces no cross edges, e is either a tree edge (an edge of T) or a back edge. In both cases, at least one of u and v must be a non-leaf node of T, that is, e is covered by C.

(b) Prove that this computation of C using the DFS traversal is a 2-approximation algorithm. (6)

Solution: Let t be the number of internal (non-leaf) nodes of T, that is, |C| = t. Process the internal nodes of T from the root downward, and match each internal node that is still unmatched with one of its children; this produces a matching M in T in which every internal node is matched, so t ≤ 2|M|. Moreover, any vertex cover must contain a distinct vertex for each edge of M, so OPT ≥ |M|. Therefore t ≤ 2·OPT.
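A possible implementation sketch of parts (a)-(b) in Python, assuming the graph is given as an adjacency list (the representation and the function name are our own):

    import sys

    def dfs_cover(adj, root=0):
        """2-approximate vertex cover of a connected graph: the internal
        (non-leaf) vertices of a DFS tree rooted at `root`.
        adj[u] is the list of neighbours of vertex u."""
        sys.setrecursionlimit(10000)
        visited = [False] * len(adj)
        internal = set()

        def dfs(u):
            visited[u] = True
            for v in adj[u]:
                if not visited[v]:
                    internal.add(u)   # u gains a tree child, hence is internal
                    dfs(v)

        dfs(root)
        return internal

    # Star K_{1,3} with centre 0: the DFS tree from 0 has only 0 as internal node.
    adj = [[1, 2, 3], [0], [0], [0]]
    print(dfs_cover(adj))   # {0}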

(c) Suppose that each vertex v ∈ V is associated with a positive weight w(v). The weighted vertex cover problem deals with the computation of a vertex cover C of G such that ∑_{v∈C} w(v) is as small as possible. Demonstrate that, given any positive constant c, there exist infinitely many graphs in which any DFS traversal leads to an approximation factor > c. (6)

Solution: Let G = T be a star graph with |V| = n ≥ 3, with the center having weight c(n−1) + 1 and each of the remaining vertices having weight 1. If we start the DFS traversal from the center, only the center is output as the vertex cover, and the weight of this cover is c(n−1) + 1. If we start the DFS traversal from a non-center vertex, then that vertex and the center constitute the vertex cover, having weight c(n−1) + 2. On the contrary, the vertex cover consisting of all vertices other than the center has weight only n − 1. In either case, the ratio of the weight of the computed cover to the optimum is at least (c(n−1) + 1)/(n − 1) > c.
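A quick numeric check of this construction, reusing the dfs_cover sketch from the previous part; the concrete values of c and n below are arbitrary choices of ours:

    # Star on n vertices, centre 0 with weight c(n-1)+1, leaves with weight 1.
    c, n = 10, 101
    w = [c * (n - 1) + 1] + [1] * (n - 1)
    adj = [list(range(1, n))] + [[0] for _ in range(n - 1)]
    cover = dfs_cover(adj, root=0)               # DFS from the centre gives {0}
    ratio = sum(w[v] for v in cover) / (n - 1)   # optimum: all leaves, weight n-1
    print(ratio)                                 # 10.01 > c = 10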

3. Let A = [a_1, a_2, ..., a_n] be an array of n positive integers with S = ∑_{i=1}^{n} a_i. Let I_0 and I_1 be two subsets of I := {1, 2, ..., n} with I_0 ∩ I_1 = ∅ and I_0 ∪ I_1 = I. Denote by A_0 and A_1 the following subcollections of A: A_0 = [a_i]_{i∈I_0} and A_1 = [a_i]_{i∈I_1}. The partition problem deals with deciding the existence of I_0, I_1 satisfying

    S_0 := ∑_{i∈I_0} a_i = ∑_{i∈I_1} a_i =: S_1 = S/2.

Here, we deal with an optimization version of the problem. We want to find a partitioning of A that is as balanced as possible; that is, we construct a partition I_0, I_1 such that S_0 ≤ S_1, and S_1 is as small as possible.

(a) For i ∈ {1, 2, ..., n}, let x_i denote the variable such that x_i = 0 if i ∈ I_0, and x_i = 1 if i ∈ I_1. Formulate the balanced set partitioning problem explained above as a 0,1-LP (linear programming) problem. Also mention how this problem can be relaxed to an LP problem. (6)

Solution: We have S_1 = ∑_{i=1}^{n} a_i x_i. Moreover, the condition S_0 ≤ S_1 is equivalent to the condition S_1 ≥ S/2. Therefore the 0,1-LP formulation is:

    minimize    ∑_{i=1}^{n} a_i x_i
    subject to  ∑_{i=1}^{n} a_i x_i ≥ S/2,
                x_i ∈ {0, 1} for all i = 1, 2, ..., n.

The relaxed LP formulation replaces each non-linear constraint x_i ∈ {0, 1} by the two linear constraints x_i ≥ 0 and x_i ≤ 1.
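As an illustration, the relaxed LP can be handed to an off-the-shelf solver. The sketch below assumes SciPy is available and encodes the ≥ constraint in the ≤ form that scipy.optimize.linprog expects; the function name is our own:

    import numpy as np
    from scipy.optimize import linprog

    def solve_relaxed_partition(a):
        """LP relaxation: minimise sum a_i x_i
        subject to sum a_i x_i >= S/2 and 0 <= x_i <= 1."""
        a = np.asarray(a, dtype=float)
        S = a.sum()
        # linprog handles A_ub @ x <= b_ub, so negate the >= constraint.
        res = linprog(c=a, A_ub=-a.reshape(1, -1), b_ub=[-S / 2],
                      bounds=[(0, 1)] * len(a))
        return res.x, res.fun

    x_star, value = solve_relaxed_partition([1, 1, 1, 1, 4])
    print(x_star, value)   # LP optimum value is S/2 = 4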

(b) We run an LP-solver to get an optimum solution (x_1^*, x_2^*, ..., x_n^*) ∈ [0,1]^n of the relaxed LP instance. We make a deterministic rounding as follows: take i ∈ I_0 if x_i^* < 1/2, and i ∈ I_1 if x_i^* ≥ 1/2. Demonstrate by examples that this rounding may produce an infeasible solution (that is, S_1 < S/2), and a solution that can be as bad as possible (that is, S_1 = S). (6)

Solution:

Infeasible solution: Take a_1 = a_2 = ··· = a_{n−1} = 1 and a_n = n, so that S = 2n − 1. Here, the optimal solution of the discrete optimization problem is OPT = n. The relaxed LP instance has an optimum of value S/2 = n − 1/2, achieved by x_1^* = x_2^* = ··· = x_{n−1}^* = 1 and x_n^* = 1/(2n). But then rounding gives I_1 = {1, 2, ..., n−1}, leading to S_1 = n − 1 < S/2.

Worst possible solution: Take a_1 = a_2 = ··· = a_n. The LP solver may output x_1^* = x_2^* = ··· = x_n^* = 1/2. Rounding gives I_1 = {1, 2, ..., n}, so S_1 = S.
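A small script reproducing the infeasible-rounding example in Python; the helper name is our own, and the LP optimum is hard-coded from the analysis above rather than solved for (indices are zero-based):

    def round_deterministic(x_star):
        """I_1 = indices i with x_i* >= 1/2."""
        return [i for i, x in enumerate(x_star) if x >= 0.5]

    # The infeasible instance: n-1 items of weight 1 and one item of weight n.
    n = 5
    a = [1] * (n - 1) + [n]                   # S = 2n - 1, so S/2 = n - 1/2
    x_star = [1.0] * (n - 1) + [1 / (2 * n)]  # an LP optimum of value n - 1/2
    I1 = round_deterministic(x_star)
    S1 = sum(a[i] for i in I1)
    print(S1, S1 >= sum(a) / 2)               # 4, False: the rounding is infeasible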

(c) Let us do randomized rounding instead. For each i ∈ I, let us independently take x_i = 1 with probability x_i^*. Denote X = ∑_{i=1}^{n} a_i x_i. Prove that

    Pr(X ≥ (3/2)·OPT) ≤ 4 (∑_{i=1}^{n} a_i^2) / (∑_{i=1}^{n} a_i)^2,

where OPT is the optimal solution of the discrete optimization problem. (6)

[Hint: For any a > 0, we have Chebyshev's inequality: Pr(|X − E(X)| ≥ a) ≤ Var(X)/a^2.]

Solution: We have E(X) = ∑_{i=1}^{n} a_i x_i^*, and S/2 ≤ E(X) ≤ OPT. But then

    Pr(X ≥ (3/2)·OPT) = Pr(X − OPT ≥ OPT/2) ≤ Pr(X − E(X) ≥ OPT/2)
                      ≤ Pr(|X − E(X)| ≥ OPT/2) ≤ 4·Var(X)/OPT^2 ≤ 16·Var(X)/S^2.

For each i, we have E(x_i) = x_i^* and E(x_i^2) = x_i^*, so Var(x_i) = E(x_i^2) − E(x_i)^2 = x_i^*(1 − x_i^*) ≤ 1/4. Since the x_i are independent of one another, we have Var(X) = ∑_{i=1}^{n} Var(a_i x_i) = ∑_{i=1}^{n} a_i^2 Var(x_i) = ∑_{i=1}^{n} a_i^2 x_i^*(1 − x_i^*) ≤ (1/4) ∑_{i=1}^{n} a_i^2. It follows that

    Pr(X ≥ (3/2)·OPT) ≤ 16·(1/4)(∑_{i=1}^{n} a_i^2)/S^2 = 4 (∑_{i=1}^{n} a_i^2) / (∑_{i=1}^{n} a_i)^2.

Remarks: The probability bound of part (c) is not always good. The trouble associated with infeasible solutions can be overcome by switching the roles of I_0 and I_1, for both deterministic and randomized rounding.
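A Monte Carlo sketch of the randomized rounding, checking the estimated tail probability against the proved bound on the all-equal instance from part (b); all names, and the choice of instance, are our own:

    import random

    def estimate_tail(a, x_star, opt, trials=100_000):
        """Monte Carlo estimate of Pr(X >= (3/2)*OPT) under independent
        rounding with Pr(x_i = 1) = x_i*."""
        hits = 0
        for _ in range(trials):
            X = sum(ai for ai, xi in zip(a, x_star) if random.random() < xi)
            if X >= 1.5 * opt:
                hits += 1
        return hits / trials

    n = 20
    a, x_star = [1] * n, [0.5] * n                       # all-equal instance, OPT = n/2
    bound = 4 * sum(ai * ai for ai in a) / sum(a) ** 2   # 4/n = 0.2
    print(estimate_tail(a, x_star, opt=n / 2), "<=", bound)   # about 0.02 <= 0.2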

4. Recall that in the bin packing problem, n items of weights a_1, a_2, ..., a_n ∈ (0, 1] and bins of capacity one are given. Our task is to pack all of the given items into as few bins as possible. Consider the strategy that keeps adding items to the most recently opened bin for as long as possible. When a further placement is not possible, the last bin is closed, a new bin is opened, and the next item is put into the newly opened bin. One calls this the next fit or the linear bin packing strategy.

(a) Prove that the next fit strategy is a 2-approximation algorithm for the bin packing problem. (6)

Solution: Let m be the number of bins used by the next-fit algorithm. Suppose that some bin j is at most half full in this packing. Then the next bin, j+1, must be more than half full; otherwise the opening of bin j+1 would not have been justified. More importantly, the total weight in bins j and j+1 together must be larger than one; in fact, the sum of the weights of the items placed in bin j and the weight of the first item put into bin j+1 already exceeds one. It follows that among the first m−1 bins, at most m/2 can be at most half full, each followed by a bin more than half full, which makes the average weight of two consecutive bins larger than half. Therefore the total weight packed is ∑_{i=1}^{n} a_i > m/2. Moreover, OPT ≥ ∑_{i=1}^{n} a_i, so m < 2·OPT. Since m and OPT are integers, we have m ≤ 2·OPT − 1.
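A direct Python sketch of the next-fit strategy as described above (the function name and the list-of-bins return value are our own choices):

    def next_fit(items):
        """Next-fit bin packing: keep one open bin; if the next item does
        not fit, close it and open a new one.  Returns the list of bins."""
        bins = []
        current, load = [], 0.0
        for a in items:
            if load + a > 1.0:          # item does not fit: close the bin
                bins.append(current)
                current, load = [], 0.0
            current.append(a)
            load += a
        if current:
            bins.append(current)
        return bins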

(b) Prove that the approximation ratio 2 is tight; that is, given any ε > 0, there exists an input instance for which the algorithm achieves an approximation ratio > 2 − ε. (6)

Solution: Given ε > 0, choose a positive integer k > 1/ε. Let there be k−1 items of weight one and k items of weight 1/k, occurring in the sequence 1/k, 1, 1/k, 1, ..., 1/k, 1, 1/k. We have OPT = k: place all k items of weight 1/k in one bin, and each item of weight one in a bin of its own; this strategy must be optimal, since all k bins are filled to capacity. The next-fit algorithm uses m = 2k − 1 bins, because no two consecutive items of the sequence fit together in one bin. The approximation ratio is therefore (2k − 1)/k = 2 − 1/k > 2 − ε.
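The tight instance can be checked against the next_fit sketch above; the value of k is an arbitrary choice of ours:

    # The tight instance from part (b): 1/k, 1, 1/k, 1, ..., 1/k, 1, 1/k.
    k = 10                                   # any k > 1/eps, here eps = 0.1
    items = [1 / k]
    for _ in range(k - 1):
        items += [1.0, 1 / k]
    m = len(next_fit(items))
    print(m, 2 * k - 1, m / k)               # 19 bins vs OPT = k = 10; ratio 1.9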

For leftover answers and rough work

"If you optimize everything, you will always be unhappy." (Donald E. Knuth)