ML Detection via SDP Relaxation


1 ML Detection via SDP Relaxation
Junxiao Song and Daniel P. Palomar
The Hong Kong University of Science and Technology (HKUST)
ELEC Convex Optimization, Fall, HKUST, Hong Kong

2 Outline of Lecture
1. BPSK Signal Detection: Problem Statement; Convex Relaxations; Comparison of Various Relaxations; Reconstruct Binary Solutions
2. QPSK Constellation; M-PSK Constellations; 16-QAM Constellation; Universal Binary SDP Relaxation
3. Dual Barrier Method for SDP

4 BPSK Signal Detection: Problem Statement
Given the linear observation model y = Hs + w, where y \in \mathbb{R}^m is the received signal, H \in \mathbb{R}^{m \times n} is the known channel matrix, s \in \{\pm 1\}^n is the vector of transmitted BPSK symbols, and w \sim \mathcal{N}(0, \sigma^2 I) is the Gaussian noise vector. The maximum likelihood (ML) detection problem is
    minimize_s   \|y - Hs\|_2^2
    subject to   s \in \{\pm 1\}^n.

5 How Hard Is the Problem?
It is a nonconvex problem, due to the binary constraint set. We could solve it by evaluating all possible combinations, i.e., brute-force search. The complexity of a brute-force search is O(2^n), which is not acceptable for large n! The problem is NP-hard in general: no polynomial-time algorithm is known.

6 What can we do? One possibility is to relax the problem.

8 Unconstrained and Box Relaxation
Unconstrained relaxation:
    minimize_{s \in \mathbb{R}^n}   \|y - Hs\|_2^2.
The result is zero forcing (ZF), and the problem has the closed-form solution s = (H^T H)^{-1} H^T y.
Box relaxation:
    minimize_s   \|y - Hs\|_2^2
    subject to   -1 \le s_i \le 1, \quad i = 1, \dots, n.
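As a concrete illustration (a minimal sketch, not from the slides; it assumes NumPy/CVXPY and a random instance with the names below), the two relaxations can be computed as:

```python
# Minimal sketch of the ZF and box relaxations on a random BPSK instance.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m = n = 8
H = rng.standard_normal((m, n))
s_true = np.sign(rng.standard_normal(n))        # BPSK symbols in {-1, +1}
y = H @ s_true + 0.1 * rng.standard_normal(m)

# Unconstrained relaxation (ZF): ordinary least squares, closed form.
s_zf = np.linalg.solve(H.T @ H, H.T @ y)

# Box relaxation: a small QP with -1 <= s_i <= 1.
s = cp.Variable(n)
cp.Problem(cp.Minimize(cp.sum_squares(y - H @ s)), [s >= -1, s <= 1]).solve()
s_box = s.value
```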

9 SDP Relaxation I
Since
    \|y - Hs\|_2^2 = y^T y + s^T H^T H s - 2 y^T H s = x^T L x,
where
    L = \begin{bmatrix} H^T H & -H^T y \\ -y^T H & y^T y \end{bmatrix} \quad \text{and} \quad x = \begin{bmatrix} s \\ 1 \end{bmatrix},
the problem can be rewritten in homogeneous form as
    minimize_x   x^T L x
    subject to   x \in \{\pm 1\}^{n+1}, \quad x_{n+1} = 1.

10 SDP Relaxation II
Note that the last constraint x_{n+1} = 1 is unnecessary: if x is a solution, so is -x. Thus, we can simply remove the constraint. Defining X = x x^T, the problem is equivalent to
    minimize_{X, x}   \mathrm{Tr}(L X)
    subject to        \mathrm{diag}(X) = \mathbf{1}_{n+1}, \quad X = x x^T.

11 SDP Relaxation III
The constraint X = x x^T is equivalent to X \succeq 0 and \mathrm{rank}(X) = 1. The key idea in SDP relaxation is to relax the problem by removing the rank constraint; the problem becomes
    minimize_X   \mathrm{Tr}(L X)
    subject to   \mathrm{diag}(X) = \mathbf{1}_{n+1}, \quad X \succeq 0,
which is an SDP.
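A minimal CVXPY sketch of this SDR (an assumed implementation; the slides themselves use generic SDP solvers such as CVX, and the helper name is ours):

```python
# BPSK SDP relaxation: minimize Tr(LX) s.t. diag(X) = 1, X PSD.
import numpy as np
import cvxpy as cp

def sdr_bpsk(H, y):
    n = H.shape[1]
    # Homogenized cost matrix L = [[H'H, -H'y], [-y'H, y'y]].
    L = np.block([[H.T @ H, -(H.T @ y)[:, None]],
                  [-(y @ H)[None, :], np.array([[y @ y]])]])
    X = cp.Variable((n + 1, n + 1), symmetric=True)
    prob = cp.Problem(cp.Minimize(cp.trace(L @ X)),
                      [cp.diag(X) == 1, X >> 0])
    prob.solve()
    return X.value, L

X_hat, L = sdr_bpsk(H, y)    # continuing the random instance above
```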

12 An Alternative Way to Derive SDR I
Recalling the original problem (nonhomogeneous form)
    minimize_s   s^T H^T H s - 2 y^T H s + y^T y
    subject to   s \in \{\pm 1\}^n,
and letting S = s s^T, we get
    minimize_{S, s}   \mathrm{Tr}(H^T H S) - 2 y^T H s + y^T y
    subject to        \mathrm{diag}(S) = \mathbf{1}_n, \quad S = s s^T.

13 An Alternative Way to Derive SDR II
Relaxing S = s s^T to S \succeq s s^T, we can derive an SDR:
    minimize_{S, s}   \mathrm{Tr}(H^T H S) - 2 y^T H s + y^T y
    subject to        \mathrm{diag}(S) = \mathbf{1}_n, \quad S \succeq s s^T.
This nonhomogeneous SDR is equivalent to the previously derived SDR, by the Schur complement:
    S \succeq s s^T \iff X = \begin{bmatrix} S & s \\ s^T & 1 \end{bmatrix} \succeq 0.

14 Schur Complement
Consider a matrix X \in \mathbb{S}^n partitioned as
    X = \begin{bmatrix} A & B \\ B^T & C \end{bmatrix},
where A \in \mathbb{S}^k. If A is nonsingular, the matrix S = C - B^T A^{-1} B is called the Schur complement of A in X. Important characterizations:
    X \succ 0 if and only if A \succ 0 and S \succ 0.
    If A \succ 0, then X \succeq 0 if and only if S \succeq 0.

15 SDR as the Dual of the Dual I
Recalling the original problem in homogeneous form
    minimize_x   x^T L x
    subject to   x \in \{\pm 1\}^{n+1},
the Lagrangian is
    L(x, \lambda) = x^T L x + \sum_{i=1}^{n+1} \lambda_i (x_i^2 - 1) = -\lambda^T \mathbf{1} + x^T (L + \mathrm{Diag}(\lambda)) x.
The dual problem is
    maximize_\lambda   -\lambda^T \mathbf{1}
    subject to         L + \mathrm{Diag}(\lambda) \succeq 0.

16 SDR as the Dual of the Dual II
Recalling the SDP relaxation
    minimize_X   \mathrm{Tr}(L X)
    subject to   \mathrm{diag}(X) = \mathbf{1}_{n+1}, \quad X \succeq 0,
the dual function is
    g(Z, \lambda) = \inf_X \; \mathrm{Tr}(L X) - \mathrm{Tr}(Z X) + \lambda^T (\mathrm{diag}(X) - \mathbf{1})
                  = \inf_X \; \mathrm{Tr}((L - Z + \mathrm{Diag}(\lambda)) X) - \lambda^T \mathbf{1}
                  = \begin{cases} -\lambda^T \mathbf{1} & L + \mathrm{Diag}(\lambda) = Z \succeq 0 \\ -\infty & \text{otherwise.} \end{cases}

17 SDR as the Dual of the Dual III
Thus, the dual problem of the SDR is
    maximize_\lambda   -\lambda^T \mathbf{1}
    subject to         L + \mathrm{Diag}(\lambda) \succeq 0,
which is exactly the dual problem of the original problem. Since strong duality holds for the SDP, the SDP relaxation can be considered as the dual of the dual of the original problem.
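This can be checked numerically: solving the dual SDP directly should return the same optimal value as the primal SDR above (a hedged sketch; L as built in sdr_bpsk):

```python
# Dual SDP: maximize -1'lam  s.t.  L + Diag(lam) is PSD.
import cvxpy as cp

def dual_sdp(L):
    p = L.shape[0]                       # p = n + 1
    lam = cp.Variable(p)
    prob = cp.Problem(cp.Maximize(-cp.sum(lam)),
                      [L + cp.diag(lam) >> 0])
    prob.solve()
    return prob.value                    # matches Tr(L X_hat) by strong duality
```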

18 Which relaxation is better?

19 Comparison via Simulation
Figure: SER performance using quantization, n = m = 8, BPSK (symbol error rate vs. SNR in dB; curves: ZF, box relaxation, SDP relaxation).

20 Comparison of Various Relaxations
From the figure, we can see that the unconstrained relaxation (ZF) performs worst. This is easy to understand, since the other two relaxations are much tighter than the unconstrained one. We can also see that the SDP relaxation performs better than the box relaxation. Is the SDP relaxation tighter than the box relaxation?

22 Box Relaxation vs. SDP Relaxation I
Recalling the box relaxation
    minimize_s   \|y - Hs\|_2^2
    subject to   s_i^2 \le 1, \quad i = 1, \dots, n,
and the SDP relaxation
    minimize_{S, s}   \mathrm{Tr}(H^T H S) - 2 y^T H s + y^T y
    subject to        \mathrm{diag}(S) = \mathbf{1}_n, \quad S \succeq s s^T,
it seems hard to compare these two relaxations from the primal perspective.

23 Box Relaxation vs. SDP Relaxation II
But we know that the optimal value of the SDP relaxation is equal to that of the dual of the original problem, thus
    p^\star_{SDR} = \max_\lambda \min_s \; \|y - Hs\|_2^2 + \sum_{i=1}^n \lambda_i (s_i^2 - 1).
For the box relaxation, by strong duality,
    p^\star_{Box} = \max_{\lambda \ge 0} \min_s \; \|y - Hs\|_2^2 + \sum_{i=1}^n \lambda_i (s_i^2 - 1).
The feasible set of \lambda in the SDP relaxation contains that of the box relaxation; i.e., the box relaxation may be seen as a further relaxation of the SDP relaxation. Thus p^\star_{Box} \le p^\star_{SDR} \le p^\star_{ML}.

24 Reconstruct Binary Solutions
The original problem requires the solution to be binary. If the optimal solution of the unconstrained or box relaxation is binary, or the optimal solution of the SDP relaxation is rank 1, then clearly we have solved the original problem. But usually the optimal solution of the relaxed problem will not be binary. How can we reconstruct binary solutions?

26 Simple Quantization
For the unconstrained and box relaxations, a natural approach is to quantize the solution by looking at the sign, i.e., \hat{s} = \mathrm{sign}(s_{relax}). For the SDP relaxation, based on the fact that, when the solution \hat{X} is rank 1, it has the structure
    \hat{X} = \begin{bmatrix} s s^T & s \\ s^T & 1 \end{bmatrix},
one simple approach is to quantize the last column of \hat{X}, i.e., \hat{s} = \mathrm{sign}(\hat{X}_{1:n, n+1}). This is the quantization method used in the previous figure.
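Both quantization rules are one-liners (an assumed NumPy sketch; X_hat is the SDR solution from the earlier sketch):

```python
import numpy as np

def recover_sign(s_relax):
    # Unconstrained / box relaxations: quantize componentwise.
    return np.sign(s_relax)

def recover_last_column(X_hat):
    # SDR: the last column equals [s; 1] when X_hat is rank 1.
    n = X_hat.shape[0] - 1
    return np.sign(X_hat[:n, n])
```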

27 Eigenvalue Decomposition
There are better approaches for the SDP relaxation, since we have the whole matrix \hat{X}. Suppose \lambda is the largest eigenvalue of \hat{X} and u is the corresponding eigenvector. As before, based on the fact that, when \hat{X} is rank 1,
    \hat{X} = \begin{bmatrix} s \\ 1 \end{bmatrix} \begin{bmatrix} s^T & 1 \end{bmatrix} = \lambda u u^T,
i.e., u is a scaled version of [s; 1], we can take
    \hat{s} = \mathrm{sign}\left( \frac{u_{1:n}}{u_{n+1}} \right).
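A corresponding sketch (assumed helper; np.linalg.eigh returns eigenvalues in ascending order, so the last column is the leading eigenvector):

```python
def recover_eigenvector(X_hat):
    # For rank-1 X_hat, the leading eigenvector u is proportional to [s; 1].
    w, U = np.linalg.eigh(X_hat)
    u = U[:, -1]
    n = X_hat.shape[0] - 1
    return np.sign(u[:n] / u[n])
```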

28 Randomization
The idea of randomization is to generate random points using \hat{X} as the covariance matrix, then quantize them and keep the best of the points. A convenient way to implement this is to factorize \hat{X} as \hat{X} = V V^T, multiply V by random vectors r^{(i)} \sim \mathcal{N}(0, I), and then quantize:
    \hat{x}^{(i)} = \mathrm{sign}\left( \frac{V r^{(i)}}{V_{n+1,:} \, r^{(i)}} \right), \quad i = 1, \dots, N.
Then just keep the point \hat{x} with minimum objective value. Randomization usually provides better performance than the previous two methods.
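A sketch of the randomization rule (assumed implementation; the small diagonal jitter guards the Cholesky factorization against tiny negative eigenvalues of the numerical solution):

```python
def recover_randomization(X_hat, L, N=100, seed=0):
    n = X_hat.shape[0] - 1
    rng = np.random.default_rng(seed)
    V = np.linalg.cholesky(X_hat + 1e-9 * np.eye(n + 1))   # X_hat = V V^T
    best, best_val = None, np.inf
    for _ in range(N):
        x = np.sign(V @ rng.standard_normal(n + 1))
        x *= x[n]                 # same as dividing by the last entry's sign
        val = x @ L @ x
        if val < best_val:
            best_val, best = val, x
    return best[:n]
```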

29 Approximation Bound of Randomization
When we generate candidate solutions using the randomization approach, the expected value of the objective can be computed explicitly (with arcsin applied elementwise):
    E(\hat{x}^T L \hat{x}) = \mathrm{Tr}(L \, E(\hat{x} \hat{x}^T)) = \frac{2}{\pi} \mathrm{Tr}(L \arcsin(\hat{X})).
We are guaranteed to reach this expected value after sampling a few points \hat{x}; hence we know that the optimal value p^\star_{ML} of the ML detection problem is bounded by
    \mathrm{Tr}(L \hat{X}) \le p^\star_{ML} \le \hat{x}^T L \hat{x} \le \frac{2}{\pi} \mathrm{Tr}(L \arcsin(\hat{X})).
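Numerically, the sandwich can be evaluated in two lines (hedged sketch; L and X_hat as above, with a clip guarding tiny numerical excursions outside [-1, 1]):

```python
lower = np.trace(L @ X_hat)                                   # SDR value
upper = (2 / np.pi) * np.trace(L @ np.arcsin(np.clip(X_hat, -1.0, 1.0)))
```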

30 SER Performance
Figure: SER performance, n = m = 8, BPSK (symbol error rate vs. SNR in dB; curves: ZF, box relaxation, SDP last column, SDP eigenvector, SDP randomization).

32 So far we have considered the SDP relaxation for the case of BPSK symbols and real-valued channels. Can we handle more general complex-valued channels and constellations, such as QPSK, M-PSK and 16-QAM? Yes! Different papers deal with different constellations:
    BPSK and QPSK [Tan and Rasmussen, 2001]
    M-PSK [Ma et al., 2004]
    16-QAM [Wiesel et al., 2005], [Sidiropoulos and Luo, 2006]

34 General Channel Model
From now on, we will consider the following complex-valued MIMO model:
    \tilde{y} = \tilde{H} \tilde{s} + \tilde{w},
where \tilde{y} \in \mathbb{C}^m is the received signal, \tilde{H} \in \mathbb{C}^{m \times n} is the channel matrix, \tilde{s} \in \mathcal{A}^n is the vector of transmitted symbols, with \mathcal{A} being the constellation set, and \tilde{w} \sim \mathcal{CN}(0, \sigma^2 I) is the complex Gaussian noise vector. Constellations:
    QPSK: \mathcal{A} = \{\pm 1 \pm j\}
    M-PSK: \mathcal{A} = \{ s = e^{j \pi (2k-1)/M} \mid k = 1, \dots, M \}
    16-QAM: \mathcal{A} = \{ s = s_1 + j s_2 \mid s_1, s_2 \in \{\pm 1, \pm 3\} \}

35 ML Detection with QPSK Constellation
The signal model is \tilde{y} = \tilde{H} \tilde{s} + \tilde{w}, where \tilde{s}_i \in \{\pm 1 \pm j\}. The ML detection problem in the QPSK case:
    minimize_{\tilde{s}}   \|\tilde{y} - \tilde{H} \tilde{s}\|_2^2
    subject to             \tilde{s} \in \{\pm 1 \pm j\}^n.

36 SDP Relaxation for QPSK
The complex-valued problem can be reformulated as a real-valued one if we define
    y = \begin{bmatrix} \mathrm{Re}\{\tilde{y}\} \\ \mathrm{Im}\{\tilde{y}\} \end{bmatrix}; \quad H = \begin{bmatrix} \mathrm{Re}\{\tilde{H}\} & -\mathrm{Im}\{\tilde{H}\} \\ \mathrm{Im}\{\tilde{H}\} & \mathrm{Re}\{\tilde{H}\} \end{bmatrix}; \quad s = \begin{bmatrix} \mathrm{Re}\{\tilde{s}\} \\ \mathrm{Im}\{\tilde{s}\} \end{bmatrix}; \quad w = \begin{bmatrix} \mathrm{Re}\{\tilde{w}\} \\ \mathrm{Im}\{\tilde{w}\} \end{bmatrix}.
The problem reduces to a BPSK one with twice the dimension: y = Hs + w, s \in \{\pm 1\}^{2n}, and thus can be solved via SDP relaxation.
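A sketch of this reformulation (assumed NumPy helper):

```python
# Map the complex model (H_c, y_c) to the real-valued BPSK model (H, y).
import numpy as np

def complex_to_real(H_c, y_c):
    y = np.concatenate([y_c.real, y_c.imag])
    H = np.block([[H_c.real, -H_c.imag],
                  [H_c.imag,  H_c.real]])
    return H, y            # now run the BPSK SDR with 2n real symbols
```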

37 M-PSK Constellations
M-PSK: \mathcal{A} = \{ s = e^{j \pi (2k-1)/M} \mid k = 1, \dots, M \}. The complex constellation points are equally spaced on the unit circle.
Figure: the 8-PSK constellation on the unit circle in the complex plane.

38 ML Detection with M-PSK Constellations
The signal model is \tilde{y} = \tilde{H} \tilde{s} + \tilde{w}, where \tilde{s}_i \in \{ s = e^{j \pi (2k-1)/M} \mid k = 1, \dots, M \}. The ML detection problem in the M-PSK case:
    minimize_{\tilde{s}}   \|\tilde{y} - \tilde{H} \tilde{s}\|_2^2
    subject to             \tilde{s} \in \{ s = e^{j \pi (2k-1)/M} \mid k = 1, \dots, M \}^n.

39 SDP Relaxation for M-PSK I
Idea: relax the constellation constraints to |\tilde{s}_i| = 1, i.e., the whole unit circle, and then apply SDP relaxation. Following this idea, we first relax the problem to
    minimize_{\tilde{s}}   \|\tilde{y} - \tilde{H} \tilde{s}\|_2^2
    subject to             |\tilde{s}_i| = 1, \quad i = 1, \dots, n.
By letting \tilde{S} = \tilde{s} \tilde{s}^H, we get
    minimize_{\tilde{S}, \tilde{s}}   \mathrm{Tr}(\tilde{H}^H \tilde{H} \tilde{S}) - 2 \mathrm{Re}\{\tilde{y}^H \tilde{H} \tilde{s}\} + \tilde{y}^H \tilde{y}
    subject to                        \mathrm{diag}(\tilde{S}) = \mathbf{1}_n, \quad \tilde{S} = \tilde{s} \tilde{s}^H.

40 SDP Relaxation for M-PSK II
As before, by further relaxing \tilde{S} = \tilde{s} \tilde{s}^H to \tilde{S} \succeq \tilde{s} \tilde{s}^H, we can derive a complex-valued SDR:
    minimize_{\tilde{S}, \tilde{s}}   \mathrm{Tr}(\tilde{H}^H \tilde{H} \tilde{S}) - 2 \mathrm{Re}\{\tilde{y}^H \tilde{H} \tilde{s}\} + \tilde{y}^H \tilde{y}
    subject to                        \mathrm{diag}(\tilde{S}) = \mathbf{1}_n, \quad \tilde{S} \succeq \tilde{s} \tilde{s}^H.
By the Schur complement, we can rewrite the complex-valued SDR in homogeneous form:
    minimize_{X \in \mathbb{C}^{(n+1) \times (n+1)}}   \mathrm{Tr}(\tilde{L} X)
    subject to                                         \mathrm{diag}(X) = \mathbf{1}_{n+1}, \quad X \succeq 0,
where
    \tilde{L} = \begin{bmatrix} \tilde{H}^H \tilde{H} & -\tilde{H}^H \tilde{y} \\ -\tilde{y}^H \tilde{H} & \tilde{y}^H \tilde{y} \end{bmatrix}.
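CVXPY can handle the complex-valued SDR directly through a Hermitian variable (an assumed sketch; the helper name is ours):

```python
# Complex SDR for M-PSK: minimize Tr(Lc X) s.t. diag(X) = 1, X PSD Hermitian.
import numpy as np
import cvxpy as cp

def sdr_mpsk(H_c, y_c):
    n = H_c.shape[1]
    Lc = np.block([[H_c.conj().T @ H_c, -(H_c.conj().T @ y_c)[:, None]],
                   [-(y_c.conj() @ H_c)[None, :],
                    np.array([[y_c.conj() @ y_c]])]])
    X = cp.Variable((n + 1, n + 1), hermitian=True)
    prob = cp.Problem(cp.Minimize(cp.real(cp.trace(Lc @ X))),
                      [cp.diag(X) == 1, X >> 0])
    prob.solve()
    return X.value
```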

41 16-QAM Constellation
16-QAM: \mathcal{A} = \{ s = s_1 + j s_2 \mid s_1, s_2 \in \{\pm 1, \pm 3\} \}

42 ML Detection with 16-QAM Constellation
The signal model is \tilde{y} = \tilde{H} \tilde{s} + \tilde{w}, where \tilde{s}_i \in \{ s = s_1 + j s_2 \mid s_1, s_2 \in \{\pm 1, \pm 3\} \}. The ML detection problem in the 16-QAM case:
    minimize_{\tilde{s}}   \|\tilde{y} - \tilde{H} \tilde{s}\|_2^2
    subject to             \tilde{s} \in \{ s = s_1 + j s_2 \mid s_1, s_2 \in \{\pm 1, \pm 3\} \}^n.
As in the QPSK case, we can rewrite the problem as a real-valued one:
    minimize_s   \|y - Hs\|_2^2
    subject to   s \in \{\pm 1, \pm 3\}^{2n}.

43 SDP Relaxation with Polynomial Constraints I
The constellation constraints can be written in polynomial form, (s_i^2 - 1)(s_i^2 - 9) = 0, i = 1, \dots, 2n, to obtain the following equivalent formulation:
    minimize_{s, t}   \|y - Hs\|_2^2
    subject to        s_i^2 = t_i, \quad i = 1, \dots, 2n
                      t_i^2 - 10 t_i + 9 = 0, \quad i = 1, \dots, 2n.
Since there are quadratic equality constraints, the problem is nonconvex.

44 SDP Relaxation with Polynomial Constraints II
Defining the new variable
    X = \begin{bmatrix} s \\ t \\ 1 \end{bmatrix} \begin{bmatrix} s^T & t^T & 1 \end{bmatrix} = \begin{bmatrix} s s^T & s t^T & s \\ t s^T & t t^T & t \\ s^T & t^T & 1 \end{bmatrix} = \begin{bmatrix} X_{1,1} & X_{1,2} & X_{1,3} \\ X_{2,1} & X_{2,2} & X_{2,3} \\ X_{3,1} & X_{3,2} & X_{3,3} \end{bmatrix},
the quadratic equality constraints become
    \mathrm{diag}(X_{1,1}) - X_{2,3} = 0
    \mathrm{diag}(X_{2,2}) - 10 X_{2,3} + 9 = 0,
and the objective function can be written as
    \|y - Hs\|_2^2 = \mathrm{Tr}\left( \begin{bmatrix} H^T H & 0 & -H^T y \\ 0 & 0 & 0 \\ -y^T H & 0 & y^T y \end{bmatrix} X \right).

45 SDP Relaxation with Polynomial Constraints III
The problem can be rewritten as
    minimize_X   \mathrm{Tr}\left( \begin{bmatrix} H^T H & 0 & -H^T y \\ 0 & 0 & 0 \\ -y^T H & 0 & y^T y \end{bmatrix} X \right)
    subject to   \mathrm{diag}(X_{1,1}) - X_{2,3} = 0
                 \mathrm{diag}(X_{2,2}) - 10 X_{2,3} + 9 = 0
                 X \succeq 0, \quad X_{3,3} = 1, \quad \mathrm{rank}(X) = 1.
Finally, the problem can be relaxed to an SDP by dropping the rank-one constraint.

46 SDP Relaxation with Bound Constraints I
Still considering the equivalent real-valued problem
    minimize_s   \|y - Hs\|_2^2
    subject to   s \in \{\pm 1, \pm 3\}^{2n},
and letting S = s s^T, we get
    minimize_{S, s}   \mathrm{Tr}(H^T H S) - 2 y^T H s + y^T y
    subject to        S_{ii} \in \{1, 9\}, \quad i = 1, \dots, 2n
                      S = s s^T.

47 SDP Relaxation with Bound Constraints II
Relaxing S = s s^T to S \succeq s s^T is not enough to yield a convex relaxation. We also relax \{1, 9\} to the interval [1, 9], leading to
    minimize_{S, s}   \mathrm{Tr}(H^T H S) - 2 y^T H s + y^T y
    subject to        1 \le S_{ii} \le 9, \quad i = 1, \dots, 2n
                      S \succeq s s^T.
Rather unexpectedly, it is shown in [Ma et al., 2009] that the two SDP relaxations (polynomial constraints and bound constraints) are equivalent.
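A CVXPY sketch of the bound-constraints relaxation, written in the homogenized form obtained by the same Schur-complement step as in the BPSK case (names are ours; Lr is the real-valued cost matrix built from H and y, with N = 2n real symbols):

```python
import cvxpy as cp

def sdr_16qam_bounds(Lr):
    N = Lr.shape[0] - 1                   # N = 2n real symbols
    X = cp.Variable((N + 1, N + 1), symmetric=True)
    cons = [X >> 0, X[N, N] == 1,
            cp.diag(X)[:N] >= 1,          # interval relaxation of {1, 9}
            cp.diag(X)[:N] <= 9]
    prob = cp.Problem(cp.Minimize(cp.trace(Lr @ X)), cons)
    prob.solve()
    return X.value
```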

48 So far, we have seen different specialized SDP relaxations for different constellations. Can we come up with a universal SDP relaxation that deals with all of them? Yes! Next we are going to talk about a universal binary SDP relaxation [Fan et al., 2013], which can handle any constellation.

50 Binarization of Arbitrary Constellations I
The starting point of the universal binary SDR is the existence of a binary space covering the original non-binary signal space. For any constellation \mathcal{A} \subset \mathbb{C} of size |\mathcal{A}| = M, there always exists a covering binary space \mathcal{B}, defined by a vector \alpha \in \mathbb{C}^q, such that
    \mathcal{A} \subseteq \mathcal{B} = \{ s \mid s = \alpha^H b, \; b \in \{\pm 1\}^q \},
with q satisfying \log_2(M) \le q \le M + 1.

51 Binarization of Arbitrary Constellations II
For example, if \mathcal{A} = \{s_1, s_2, \dots, s_M\}, we can construct a covering binary space \mathcal{B} by setting
    \alpha = \left[ \tfrac{1}{2} s_1, \; \tfrac{1}{2} s_2, \; \dots, \; \tfrac{1}{2} s_M, \; \tfrac{1}{2} \textstyle\sum_i s_i \right]^H,
which is not very desirable, as it yields the most expensive binary expansion, with q = M + 1. But the binary mapping (i.e., the choice of \alpha) is not unique for a given constellation \mathcal{A}, and for most constellations we can find tighter binary mappings with smaller q. For example, for the 16-QAM constellation we can construct \mathcal{B} by using \alpha = [2, 2j, 1, j]^H, and thus q = \log_2 16 = 4.
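This 16-QAM mapping is easy to verify by enumeration (a hedged sketch; it checks that {alpha^H b : b in {±1}^4} reproduces exactly the 16 constellation points):

```python
import itertools
import numpy as np

coeff = np.array([2, 2j, 1, 1j])         # the row vector alpha^H
points = {complex(np.dot(coeff, b))
          for b in itertools.product([-1, 1], repeat=4)}
qam16 = {s1 + 1j * s2 for s1 in (-3, -1, 1, 3) for s2 in (-3, -1, 1, 3)}
assert points == qam16                    # B = A here: no extra points
```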

52 Binarization of Arbitrary Constellations III
In the case of the 16-QAM constellation, it just so happens that \mathcal{B} = \mathcal{A}. In other cases, however, like the 12-QAM constellation shown in the figure below, \mathcal{B} will contain more points than \mathcal{A}, i.e., \mathcal{B} \supsetneq \mathcal{A}, and we need to add some constraints to exclude all the undesired points.
Figure: the 12-QAM constellation.

53 Binarization of Arbitrary Constellations IV
The following lemma gives linear constraints that remove all the undesired points.
Lemma. For any constellation \mathcal{A} \subset \mathbb{C} of size |\mathcal{A}| = M, there always exist a vector \alpha \in \mathbb{C}^q (with q satisfying \log_2(M) \le q \le M + 1) and a matrix D such that
    \mathcal{A} = \{ s \mid s = \alpha^H b, \; b^T D \le (q - 2) \mathbf{1}^T, \; b \in \{\pm 1\}^q \}.
One possible choice for D is D = [d_1, d_2, \dots, d_l], where d_1, d_2, \dots, d_l are the binary vectors that generate the difference \mathcal{B} \setminus \mathcal{A}, i.e., \alpha^H d_i \in \mathcal{B} \setminus \mathcal{A}, i = 1, \dots, l.

54 Table: Binarization for different constellations
    BPSK: \mathcal{A} = \{-1, 1\}; \alpha = 1.
    QPSK: \mathcal{A} = \{\pm 1 \pm j\}; \alpha = [1, j]^H.
    16-QAM: \mathcal{A} = \{ s = s_1 + j s_2 \mid s_1, s_2 \in S \}, S = \{-3, -1, 1, 3\}; \alpha = [2, 2j, 1, j]^H.
    64-QAM: \mathcal{A} = \{ s = s_1 + j s_2 \mid s_1, s_2 \in S \}, S = \{-7, -5, -3, -1, 1, 3, 5, 7\}; \alpha = [4, 4j, 2, 2j, 1, j]^H.
    256-QAM: \mathcal{A} = \{ s = s_1 + j s_2 \mid s_1, s_2 \in S \}, S = \{\pm(2i+1), \; i = 0, 1, \dots, 7\}; \alpha = [8, 8j, 4, 4j, 2, 2j, 1, j]^H.
    12-QAM: \mathcal{A} = \{ s \mid s \in \mathcal{A}_{16\text{-QAM}}, \; |s|^2 < 18 \}; \alpha = [2, 2j, 1, j]^H.
    32-QAM: \mathcal{A} = \{ s \mid s \in \mathcal{A}_{64\text{-QAM}}, \; |s|^2 < 50 \}; \alpha = [4, 4j, 2, 2j, 1, j]^H.
    8-PSK: \mathcal{A} = \{ s = e^{j(2i+1)\pi/8} \mid i = 0, 1, \dots, 7 \}; \alpha = [a, b, aj, bj]^H, with a = \frac{\sqrt{2}}{2}\cos(\pi/8), b = \frac{\sqrt{2}}{2}\sin(\pi/8).

55 Universal Binary SDP Relaxation I
With the binary representation of arbitrary constellations in the previous lemma, the signal space \mathcal{A}^n can be expressed as
    \mathcal{A}^n = \{ \tilde{s} \mid \tilde{s} = \bar{\alpha}^H \bar{b}, \; \bar{b}^T \bar{D} \le (q - 2) \mathbf{1}^T, \; \bar{b} \in \{\pm 1\}^{qn} \},
where \bar{\alpha}^H = \alpha^H \otimes I_{n \times n}, \bar{b} = [b_1^T, \dots, b_q^T]^T (with the n-dimensional vector b_q containing the q-th bit of the n transmitted symbols), and \bar{D} = D \otimes I_{n \times n}. The implication is that any n-dimensional signal space \mathcal{A}^n can be expressed by qn-dimensional binary vectors \bar{b} that satisfy the linear inequality constraints \bar{b}^T \bar{D} \le (q - 2) \mathbf{1}^T.

56 Universal Binary SDP Relaxation II
Recalling the general ML detection problem
    minimize_{\tilde{s}}   \|\tilde{y} - \tilde{H} \tilde{s}\|_2^2
    subject to             \tilde{s} \in \mathcal{A}^n,
and applying the binary representation of \mathcal{A}^n, the problem becomes
    minimize_{\bar{b}}   \|\tilde{y} - \tilde{H} \bar{\alpha}^H \bar{b}\|_2^2
    subject to           \bar{b}^T \bar{D} \le (q - 2) \mathbf{1}^T
                         \bar{b} \in \{\pm 1\}^{qn},
which is now a qn-dimensional binary ML detection problem.

57 Universal Binary SDP Relaxation III
The problem reduces to the BPSK case, but with additional linear inequality constraints. By letting x = [\bar{b}^T, 1]^T and X = x x^T, the problem becomes
    minimize_{X, x}   \mathrm{Tr}(L X)
    subject to        \mathrm{diag}(X) = \mathbf{1}_{qn+1}
                      X = x x^T
                      X_{qn+1, 1:qn} \bar{D} \le (q - 2) \mathbf{1}^T,
where
    L = \begin{bmatrix} \bar{H}^H \bar{H} & -\bar{H}^H \tilde{y} \\ -\tilde{y}^H \bar{H} & \tilde{y}^H \tilde{y} \end{bmatrix}, \quad \text{with } \bar{H} = \tilde{H} \bar{\alpha}^H.

58 Universal Binary SDP Relaxation IV
By relaxing X = x x^T to X \succeq 0, we finally get the SDP relaxation
    minimize_X   \mathrm{Tr}(L X)
    subject to   \mathrm{diag}(X) = \mathbf{1}_{qn+1}
                 X \succeq 0
                 X_{qn+1, 1:qn} \bar{D} \le (q - 2) \mathbf{1}^T.
To sum up: given a constellation, we first find \alpha and D, then let \bar{\alpha}^H = \alpha^H \otimes I_{n \times n} and \bar{D} = D \otimes I_{n \times n}, and construct L as above. With \bar{D} and L, we then solve the above SDP relaxation.
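An end-to-end sketch of this construction (an assumed implementation, with names of our choosing; since b is real, only the real part of the quadratic form matters, which is why L is built from real parts below):

```python
# Universal binary SDR for an arbitrary constellation given (alpha, D).
import numpy as np
import cvxpy as cp

def universal_sdr(H_t, y_t, alpha_H, D):
    # alpha_H: length-q row vector alpha^H; D: q x l matrix from the lemma.
    n = H_t.shape[1]
    q = alpha_H.size
    p = q * n
    Hb = H_t @ np.kron(alpha_H[None, :], np.eye(n))   # Hbar, m x qn
    L = np.block([[(Hb.conj().T @ Hb).real, -(Hb.conj().T @ y_t).real[:, None]],
                  [-(y_t.conj() @ Hb).real[None, :],
                   np.array([[(y_t.conj() @ y_t).real]])]])
    X = cp.Variable((p + 1, p + 1), symmetric=True)
    cons = [cp.diag(X) == 1, X >> 0]
    if D.size:                                        # no cuts needed if B = A
        Db = np.kron(D, np.eye(n))                    # Dbar, qn x ln
        cons.append(X[p, :p] @ Db <= q - 2)
    cp.Problem(cp.Minimize(cp.trace(L @ X)), cons).solve()
    return X.value
```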

60 Solving the SDP Relaxation
Given the SDR formulation, general-purpose software, such as CVX, can be used to solve the SDP conveniently. But to improve computational efficiency, we can write our own interior-point method (IPM) that explicitly exploits the problem structure. For ease of illustration, we take the SDP relaxation in the BPSK case as an example to show the derivation of a specialized (and fast) IPM.

61 A First Look at the Problem
Recalling the problem formulation
    minimize_X   \mathrm{Tr}(L X)
    subject to   \mathrm{diag}(X) = \mathbf{1}_{n+1}, \quad X \succeq 0.
We know that in the barrier method we need to compute the gradient and Hessian. But in the above problem the variable is an (n+1) \times (n+1) matrix, so the Hessian is of size (n+1)^2 \times (n+1)^2, which is hard to handle. Thus, instead of directly dealing with the primal problem, we resort to the dual problem.

62 Dual Barrier Method for SDP I
The dual problem is
    maximize_v   -\mathbf{1}^T v
    subject to   L + \mathrm{Diag}(v) \succeq 0.
The logarithmic barrier for L + \mathrm{Diag}(v) \succeq 0 is -\log\det(L + \mathrm{Diag}(v)). The dual barrier method solves the following problem using Newton's method:
    minimize_v   f(v) = t \, (\mathbf{1}^T v) - \log\det(L + \mathrm{Diag}(v)),
where t is the barrier parameter.

63 Dual Barrier Method for SDP II
Denoting by v^\star(t) the optimal solution of this problem for a given t, the gradient must vanish:
    \nabla f(v^\star(t)) = t \mathbf{1} - \mathrm{diag}\left( (L + \mathrm{Diag}(v^\star(t)))^{-1} \right) = 0.
Defining X^\star(t) = \frac{1}{t} (L + \mathrm{Diag}(v^\star(t)))^{-1}, we then have \mathrm{diag}(X^\star(t)) = \mathbf{1}_{n+1}, so X^\star(t) is a feasible point for the primal.

64 Dual Barrier Method for SDP III
The duality gap is
    \mathrm{Tr}(L X^\star(t)) - (-v^\star(t)^T \mathbf{1})
      = \mathrm{Tr}(L X^\star(t)) + v^\star(t)^T \mathrm{diag}(X^\star(t))
      = \mathrm{Tr}(L X^\star(t)) + \mathrm{Tr}(\mathrm{Diag}(v^\star(t)) X^\star(t))
      = \mathrm{Tr}((L + \mathrm{Diag}(v^\star(t))) X^\star(t))
      = \frac{1}{t} \mathrm{Tr}(I_{n+1}) = \frac{n+1}{t},
which means \mathrm{Tr}(L X^\star(t)) is within \frac{n+1}{t} of the desired optimal value p^\star of the original problem.

65 Dual Barrier Method for SDP IV
The algorithm for solving the SDP is summarized as follows:
Algorithm: dual barrier method for SDP
Input: L, \mu > 1, t > 0, tolerance \epsilon > 0
Repeat:
  1. Centering step. Compute v^\star(t) by solving
         minimize_v   t \, (\mathbf{1}^T v) - \log\det(L + \mathrm{Diag}(v))
     using Newton's method.
  2. Stopping criterion. Quit if (n + 1)/t < \epsilon.
  3. Increase t: t = \mu t.
Output: X^\star = (1/t) (L + \mathrm{Diag}(v^\star(t)))^{-1}
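A compact NumPy sketch of this algorithm (an assumed implementation; it uses the standard facts that, for f(v) = t 1^T v - logdet(L + Diag(v)), the gradient is t1 - diag(W) and the Hessian is the elementwise product W * W, where W = (L + Diag(v))^{-1}; the L arising here is PSD, so v = 1 is a strictly feasible start):

```python
# Dual barrier method for the SDP, with Newton centering steps.
import numpy as np

def dual_barrier_sdp(L, mu=10.0, t=1.0, eps=1e-6, newton_tol=1e-8):
    p = L.shape[0]                        # p = n + 1
    v = np.ones(p)                        # strictly feasible: L + I is PD here
    while True:
        # Centering: Newton's method on f(v) = t*1'v - logdet(L + Diag(v)).
        for _ in range(50):
            W = np.linalg.inv(L + np.diag(v))
            grad = t - np.diag(W)         # gradient t*1 - diag(W)
            hess = W * W                  # Hessian of the -logdet term
            dv = -np.linalg.solve(hess, grad)
            if -grad @ dv / 2 < newton_tol:     # Newton decrement test
                break
            step = 1.0                    # backtrack to stay inside the cone
            while True:
                try:
                    np.linalg.cholesky(L + np.diag(v + step * dv))
                    break
                except np.linalg.LinAlgError:
                    step *= 0.5
            v = v + step * dv
        if p / t < eps:                   # duality gap bound (n + 1)/t
            return np.linalg.inv(L + np.diag(v)) / t   # primal X*(t)
        t *= mu
```

Compared with a generic solver, each Newton step here only factors an (n+1) x (n+1) matrix, which is what makes a tailored method fast.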

66 Computational Complexity
Figure: Average running time vs. problem size n (n = m), comparing SDPT3, SeDuMi, and the tailored dual barrier method.

67 References
Fan, X., Song, J., Palomar, D. P., and Au, O. C. (Nov. 2013). Universal binary semidefinite relaxation for ML signal detection. IEEE Transactions on Communications, 61(11).
Ma, W. K., Ching, P. C., and Ding, Z. (Oct. 2004). Semidefinite relaxation based multiuser detection for M-ary PSK multiuser systems. IEEE Transactions on Signal Processing, 52(10).
Ma, W. K., Su, C. C., Jaldén, J., Chang, T. H., and Chi, C. Y. (Jun. 2009). The equivalence of semidefinite relaxation MIMO detectors for higher-order QAM. IEEE Journal of Selected Topics in Signal Processing, 3(6).
Sidiropoulos, N. D. and Luo, Z. Q. (Sept. 2006). A semidefinite relaxation approach to MIMO detection for high-order QAM constellations. IEEE Signal Processing Letters, 13(9).
Tan, P. H. and Rasmussen, L. K. (Aug. 2001). The application of semidefinite programming for detection in CDMA. IEEE Journal on Selected Areas in Communications, 19(8).
Wiesel, A., Eldar, Y. C., and Shamai, S. (May 2005). Semidefinite relaxation for detection of 16-QAM signaling in MIMO channels. IEEE Signal Processing Letters, 12(9).

68 Thanks For more information visit:


More information

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs

Advanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material

More information

ALTHOUGH the Maximum Likelihood (ML) multiuser

ALTHOUGH the Maximum Likelihood (ML) multiuser 1 Branch-and-Bound-Based Fast Optimal Algorithm for Multiuser Detection in Synchronous CDMA J. Luo, K. Pattipati, P. Willett, L. Brunel Abstract A fast optimal algorithm based on the branch and bound (BBD)

More information

Theoretical Concepts of Machine Learning

Theoretical Concepts of Machine Learning Theoretical Concepts of Machine Learning Part 2 Institute of Bioinformatics Johannes Kepler University, Linz, Austria Outline 1 Introduction 2 Generalization Error 3 Maximum Likelihood 4 Noise Models 5

More information

In this lecture, we ll look at applications of duality to three problems:

In this lecture, we ll look at applications of duality to three problems: Lecture 7 Duality Applications (Part II) In this lecture, we ll look at applications of duality to three problems: 1. Finding maximum spanning trees (MST). We know that Kruskal s algorithm finds this,

More information

Lecture 19: November 5

Lecture 19: November 5 0-725/36-725: Convex Optimization Fall 205 Lecturer: Ryan Tibshirani Lecture 9: November 5 Scribes: Hyun Ah Song Note: LaTeX template courtesy of UC Berkeley EECS dept. Disclaimer: These notes have not

More information

Nonlinear Programming

Nonlinear Programming Nonlinear Programming SECOND EDITION Dimitri P. Bertsekas Massachusetts Institute of Technology WWW site for book Information and Orders http://world.std.com/~athenasc/index.html Athena Scientific, Belmont,

More information

Data Analysis 3. Support Vector Machines. Jan Platoš October 30, 2017

Data Analysis 3. Support Vector Machines. Jan Platoš October 30, 2017 Data Analysis 3 Support Vector Machines Jan Platoš October 30, 2017 Department of Computer Science Faculty of Electrical Engineering and Computer Science VŠB - Technical University of Ostrava Table of

More information

11 Linear Programming

11 Linear Programming 11 Linear Programming 11.1 Definition and Importance The final topic in this course is Linear Programming. We say that a problem is an instance of linear programming when it can be effectively expressed

More information

Lecture 3: Camera Calibration, DLT, SVD

Lecture 3: Camera Calibration, DLT, SVD Computer Vision Lecture 3 23--28 Lecture 3: Camera Calibration, DL, SVD he Inner Parameters In this section we will introduce the inner parameters of the cameras Recall from the camera equations λx = P

More information

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods

Unconstrained Optimization Principles of Unconstrained Optimization Search Methods 1 Nonlinear Programming Types of Nonlinear Programs (NLP) Convexity and Convex Programs NLP Solutions Unconstrained Optimization Principles of Unconstrained Optimization Search Methods Constrained Optimization

More information

Constrained Optimization

Constrained Optimization Constrained Optimization Dudley Cooke Trinity College Dublin Dudley Cooke (Trinity College Dublin) Constrained Optimization 1 / 46 EC2040 Topic 5 - Constrained Optimization Reading 1 Chapters 12.1-12.3

More information

The Alternating Direction Method of Multipliers

The Alternating Direction Method of Multipliers The Alternating Direction Method of Multipliers With Adaptive Step Size Selection Peter Sutor, Jr. Project Advisor: Professor Tom Goldstein October 8, 2015 1 / 30 Introduction Presentation Outline 1 Convex

More information

Some Advanced Topics in Linear Programming

Some Advanced Topics in Linear Programming Some Advanced Topics in Linear Programming Matthew J. Saltzman July 2, 995 Connections with Algebra and Geometry In this section, we will explore how some of the ideas in linear programming, duality theory,

More information

A Brief Look at Optimization

A Brief Look at Optimization A Brief Look at Optimization CSC 412/2506 Tutorial David Madras January 18, 2018 Slides adapted from last year s version Overview Introduction Classes of optimization problems Linear programming Steepest

More information

Approximation Algorithms

Approximation Algorithms Approximation Algorithms Group Members: 1. Geng Xue (A0095628R) 2. Cai Jingli (A0095623B) 3. Xing Zhe (A0095644W) 4. Zhu Xiaolu (A0109657W) 5. Wang Zixiao (A0095670X) 6. Jiao Qing (A0095637R) 7. Zhang

More information

Lecture 16 October 23, 2014

Lecture 16 October 23, 2014 CS 224: Advanced Algorithms Fall 2014 Prof. Jelani Nelson Lecture 16 October 23, 2014 Scribe: Colin Lu 1 Overview In the last lecture we explored the simplex algorithm for solving linear programs. While

More information