Rank Minimization over Finite Fields
Slide 1: Rank Minimization over Finite Fields

Vincent Y. F. Tan, Laura Balzano, Stark C. Draper
Department of Electrical and Computer Engineering, University of Wisconsin-Madison
ISIT 2011

Vincent Tan (UW-Madison), Rank Minimization over Finite Fields, ISIT 2011
Slides 2-8: Connection between Two Areas of Study

                   Matrix Completion            Rank-Metric Codes
Area of study      Rank minimization            Rank minimization
Applications       Collaborative filtering,     Crisscross error correction,
                   minimal realization          network coding
Field              Real R, complex C            Finite field F_q
Techniques         Functional analysis          Algebraic coding
Decoding           Convex optimization,         Berlekamp-Massey,
                   nuclear norm                 error trapping

Can we draw analogies between the two areas of study? This talk: rank minimization over finite fields.
Slides 9-13: Minimum rank distance decoding

- A rank-metric code 𝒞 is a non-empty subset of F_q^{m×n} endowed with the rank distance.
- Transmit some codeword C ∈ 𝒞. Receive some matrix R ∈ F_q^{m×n}.
- The decoding problem (under mild conditions) is Ĉ = argmin_{C ∈ 𝒞} rank(R − C). This is minimum distance decoding, since rank induces a metric.
- X := R − C is known as the error matrix, assumed low-rank.
- Assuming a linear code, decoding reduces to rank minimization over a finite field: X̂ = argmin_{X ∈ coset} rank(X).
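The rank distance is straightforward to compute in practice. A minimal sketch of mine (not from the talk), restricted to F_2 for simplicity: d_R(A, B) = rank(A ⊕ B), with rank computed by Gaussian elimination over F_2.

```python
# Rank distance over F_2: d_R(A, B) = rank(A - B) = rank(A xor B).
# Toy illustration; matrices are lists of equal-length 0/1 row lists.

def gf2_rank(rows):
    """Rank over F_2 via Gaussian elimination (works on a copy)."""
    m = [row[:] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        # Find a pivot row with a 1 in this column, at or below row `rank`.
        pivot = next((i for i in range(rank, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # Eliminate this column from all other rows (XOR = subtraction in F_2).
        for i in range(len(m)):
            if i != rank and m[i][col]:
                m[i] = [a ^ b for a, b in zip(m[i], m[rank])]
        rank += 1
    return rank

def rank_distance(a, b):
    """d_R(a, b) = rank of the entrywise XOR of two 0/1 matrices."""
    return gf2_rank([[x ^ y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)])
```

This is a metric because rank is subadditive, rank(A + B) ≤ rank(A) + rank(B), which yields the triangle inequality for d_R.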
Slides 14-17: When do low-rank errors occur?

Probabilistic crisscross error correction [Roth, IEEE Trans. Inform. Theory, 1997]:
- Crisscross error model: the corrupted symbols in a data array are confined to a prescribed number of rows or columns (or both). Roth's Fig. 1 shows a typical pattern confined to two rows and three columns.
- Under the additional assumption that the corrupted entries are uniformly distributed over the channel alphabet, and by allowing a small decoding error probability, Roth presents a coding scheme whose redundancy can get close to one half of the redundancy required for minimum-distance decoding of crisscross errors.
- Data storage applications: data stored in arrays. Crisscross errors occur in memory chip arrays (malfunctioning row drivers or column amplifiers) and in helical tapes (misalignment of the reading head causes burst errors along a track; scratches cause bursts along the tape).
- Key point for this talk: such error patterns are low-rank.

[The slides reproduce the first page of Roth (1997); only its gist is kept above.]
Slides 18-22: Problem Setup: Rank minimization over finite fields

- A square matrix X ∈ F_q^{n×n} has rank r ≪ n.
- [Figure: linear measurement model, y_a = ⟨H_a, X⟩.]
- Assume that the sensing matrices H_1, ..., H_k ∈ F_q^{n×n} are random.
- Arithmetic is performed in the field F_q; each measurement y_a ∈ F_q.
- Given (y^k, H^k), find necessary and sufficient conditions on k and on the sensing model such that recovery is reliable, i.e., P(E_n) → 0.
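The measurement model can be made concrete. In this sketch of mine (assuming q prime, so arithmetic mod q is the field), a rank-r matrix X is built as a product of random n×r and r×n factors, and each measurement is the entrywise inner product y_a = ⟨H_a, X⟩ = Σ_{i,j} [H_a]_{ij} X_{ij} mod q:

```python
import random

def low_rank_matrix(n, r, q, rng):
    """X = U V over F_q with U: n x r and V: r x n, so rank(X) <= r."""
    u = [[rng.randrange(q) for _ in range(r)] for _ in range(n)]
    v = [[rng.randrange(q) for _ in range(n)] for _ in range(r)]
    return [[sum(u[i][t] * v[t][j] for t in range(r)) % q for j in range(n)]
            for i in range(n)]

def inner(h, x, q):
    """<H, X> = sum_{i,j} H_ij * X_ij, computed mod q."""
    return sum(hi * xi for hr, xr in zip(h, x)
               for hi, xi in zip(hr, xr)) % q

rng = random.Random(42)
q, n, r, k = 5, 6, 2, 10
x = low_rank_matrix(n, r, q, rng)
# Uniform sensing matrices, one per measurement.
sensing = [[[rng.randrange(q) for _ in range(n)] for _ in range(n)]
           for _ in range(k)]
y = [inner(h, x, q) for h in sensing]   # k measurements, each in F_q
```

The recovery question of the talk is then: given only `y` and `sensing`, when can `x` be identified reliably?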
Slides 23-30: Main Results

Notation:
- k: number of linear measurements
- n: dimension of the matrix X
- r: maximum rank of the matrix X
- γ = r/n: rank-dimension ratio

Critical quantity: 2γ(1 − γ/2)n² = 2rn − r²

Result                     Statement                                 Consequence
Converse                   k < (2 − ε)γ(1 − γ/2)n²                   P(E_n) does not tend to 0
Achievability (Uniform)    k > (2 + ε)γ(1 − γ/2)n²                   P(E_n) → 0; P(E_n) ≤ q^{−n² E(R)}
Achievability (Sparse)     k > (2 + ε)γ(1 − γ/2)n²                   P(E_n) → 0
Achievability (Noisy)      k ≥ (3 + ε)(γ + σ)n² (q assumed large)    P(E_n) → 0
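The critical quantity 2rn − r² is, up to constants, the exponent of the number of n×n rank-r matrices over F_q. The exact count is N_r = [∏_{i=0}^{r−1}(qⁿ − qⁱ)]² / ∏_{i=0}^{r−1}(q^r − qⁱ); the sketch below (mine, not from the talk) checks this against brute force over F_2 and against the 4·q^{2rn−r²} bound used in the achievability proof:

```python
from itertools import product

def count_rank_r(n, r, q):
    """Exact number of n x n matrices of rank exactly r over F_q."""
    num = 1
    for i in range(r):
        num *= (q**n - q**i) ** 2   # ordered r-tuples of independent vectors, squared
    den = 1
    for i in range(r):
        den *= q**r - q**i          # |GL_r(F_q)| corrects the overcount
    return num // den

def gf2_rank(rows):
    """Rank over F_2 via Gaussian elimination (works on a copy)."""
    m = [row[:] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(rank, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for i in range(len(m)):
            if i != rank and m[i][col]:
                m[i] = [a ^ b for a, b in zip(m[i], m[rank])]
        rank += 1
    return rank

# Brute-force check over F_2 for n = 2: all 16 matrices.
n, q = 2, 2
counts = [0] * (n + 1)
for bits in product([0, 1], repeat=n * n):
    m = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
    counts[gf2_rank(m)] += 1
assert counts == [count_rank_r(n, r, q) for r in range(n + 1)]
# Bound from the talk's proof sketch: N_r <= 4 * q^(2rn - r^2).
for r in range(n + 1):
    assert count_rank_r(n, r, q) <= 4 * q ** (2 * r * n - r * r)
```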
Slides 31-32: A necessary condition on the number of measurements

Given k measurements y_a ∈ F_q and sensing matrices H_a ∈ F_q^{n×n}, we want a necessary condition for reliable recovery of X.

Proposition (Converse). Assume:
- X is drawn uniformly at random from all matrices in F_q^{n×n} of rank r;
- the sensing matrices H_a, a = 1, ..., k, are jointly independent of X;
- r/n → γ (constant).
If the number of measurements satisfies k < (2 − ε)γ(1 − γ/2)n², then P(X̂ ≠ X) ≥ ε/2 for all n sufficiently large.
Slides 33-36: A sensing model: Uniform model

- Now we assume that X is non-random, with rank(X) ≤ r = γn.
- [Figure: measurement model, y_a = ⟨H_a, X⟩.]
- Each entry of each sensing matrix H_a is i.i.d. and uniformly distributed on F_q: P([H_a]_{i,j} = h) = 1/q for all h ∈ F_q.
Slides 37-41: The min-rank decoder

We employ the min-rank decoder:

    minimize    rank(X̃)
    subject to  ⟨H_a, X̃⟩ = y_a,   a = 1, ..., k

- NP-hard, combinatorial.
- Denote the set of optimizers by S.
- Define the error event E_n := {|S| > 1} ∪ ({|S| = 1} ∩ {X̂ ≠ X}).
- We want the solution to be unique and correct.
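Since the min-rank decoder is combinatorial, a brute-force version is easy to write for tiny instances. This sketch of mine (F_2, n = 3, exhausting all 2⁹ candidates) returns the full optimizer set S, so the error event (|S| > 1, or a unique but wrong optimizer) can be inspected directly:

```python
from itertools import product
import random

def gf2_rank(rows):
    """Rank over F_2 via Gaussian elimination (works on a copy)."""
    m = [row[:] for row in rows]
    rank = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(rank, len(m)) if m[i][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for i in range(len(m)):
            if i != rank and m[i][col]:
                m[i] = [a ^ b for a, b in zip(m[i], m[rank])]
        rank += 1
    return rank

def inner_f2(h, x):
    """<H, X> over F_2 (multiplication is AND, addition is XOR/parity)."""
    return sum(hi & xi for hr, xr in zip(h, x) for hi, xi in zip(hr, xr)) % 2

def min_rank_decode(sensing, y, n):
    """Exhaustive min-rank decoder over F_2: returns the optimizer set S."""
    best, s = n + 1, []
    for bits in product([0, 1], repeat=n * n):
        cand = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        if all(inner_f2(h, cand) == ya for h, ya in zip(sensing, y)):
            r = gf2_rank(cand)
            if r < best:
                best, s = r, [cand]
            elif r == best:
                s.append(cand)
    return s

rng = random.Random(7)
n, k = 3, 12
x = [[1, 0, 1], [1, 0, 1], [0, 0, 0]]   # rank-1 ground truth
sensing = [[[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]
           for _ in range(k)]
y = [inner_f2(h, x) for h in sensing]
s = min_rank_decode(sensing, y, n)
```

With enough measurements, S is typically the singleton {X}; with too few, ties (|S| > 1) appear, which is exactly the error event E_n above.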
Slide 42: Achievability under uniform model

Proposition (Achievability under uniform model). Assume:
- the sensing matrices H_a are drawn uniformly;
- the min-rank decoder is used;
- r/n → γ (constant).
If the number of measurements satisfies k > (2 + ε)γ(1 − γ/2)n², then P(E_n) → 0.
Slides 43-47: Proof Sketch

- The error event is E_n = ⋃_{Z ≠ X : rank(Z) ≤ rank(X)} {⟨Z, H_a⟩ = ⟨X, H_a⟩, a = 1, ..., k}.
- By the union bound, P(E_n) ≤ Σ_{Z ≠ X : rank(Z) ≤ rank(X)} P(⟨Z, H_a⟩ = ⟨X, H_a⟩, a = 1, ..., k).
- At the same time, by uniformity and independence, P(⟨Z, H_a⟩ = ⟨X, H_a⟩, a = 1, ..., k) = q^{−k}.
- The number of n × n matrices over F_q of rank r is bounded above by 4 q^{2γ(1−γ/2)n²}.
- Remark: the argument can be extended to the noisy case.
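The key step P(⟨Z, H_a⟩ = ⟨X, H_a⟩, a = 1, ..., k) = q^{−k} rests on the fact that, for any fixed D = Z − X ≠ 0 and uniform H, the inner product ⟨D, H⟩ is uniform on F_q. For q = 2 and 2×2 matrices this can be checked exhaustively (a sketch of mine):

```python
from itertools import product

def inner_f2(d, h):
    """<D, H> over F_2, with matrices flattened to bit tuples."""
    return sum(di & hi for di, hi in zip(d, h)) % 2

# Enumerate all 16 uniform H for every nonzero D: exactly half the H give 0.
n_entries = 4
for d in product([0, 1], repeat=n_entries):
    if not any(d):
        continue   # D = 0 gives <D, H> = 0 always; excluded by Z != X
    zeros = sum(inner_f2(d, h) == 0
                for h in product([0, 1], repeat=n_entries))
    assert zeros == 8   # <D, H> is uniform on F_2
```

Since the k sensing matrices are i.i.d., the k inner products ⟨D, H_a⟩ are i.i.d. uniform, and the probability that all of them vanish is q^{−k}.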
Slides 48-50: The Reliability Function

Definition. The rate of a sequence of linear measurement models is defined as R := lim_{n→∞} (1 − k/n²), where k is the number of measurements.

Analogy to coding: the rate of the code 𝒞 = {C : ⟨C, H_a⟩ = 0, a = 1, ..., k} is lower bounded by 1 − k/n².

Definition. The reliability function of the min-rank decoder is defined as E(R) := lim_{n→∞} −(1/n²) log_q P(E_n).
Slides 51-53: The Reliability Function (cont.)

- Our achievability proof gives a lower bound on E(R).
- The upper bound on E(R) is more tricky. But...

Proposition (Reliability Function of the Min-Rank Decoder). Assume the sensing matrices H_a are drawn uniformly, the min-rank decoder is used, and r/n → γ (constant). Then
E(R) = ((1 − R) − 2γ(1 − γ/2))⁺,
where x⁺ := max{x, 0}.
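A one-line check of the reliability function (my sketch): E(R) = ((1 − R) − 2γ(1 − γ/2))⁺ vanishes exactly when the rate reaches 1 minus the normalized critical quantity, and grows linearly as the rate drops below that threshold.

```python
def reliability(rate, gamma):
    """E(R) = ((1 - R) - 2*gamma*(1 - gamma/2))^+ for the min-rank decoder."""
    return max(0.0, (1.0 - rate) - 2.0 * gamma * (1.0 - gamma / 2.0))

gamma = 0.25
critical = 2 * gamma * (1 - gamma / 2)          # normalized critical quantity
assert reliability(1 - critical, gamma) == 0.0  # exponent vanishes at threshold
assert reliability(0.2, gamma) > reliability(0.4, gamma) > 0  # linear below it
```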
Slides 54-60: Interpretation

E(R) = (k/n² − 2γ(1 − γ/2))⁺

- The more the ratio k/n² exceeds 2γ(1 − γ/2), the larger E(R), and the faster P(E_n) decays.
- The relationship is linear for the min-rank decoder.
- de Caen's lower bound: for events B_1, ..., B_M,
  P(⋃_{m=1}^{M} B_m) ≥ Σ_{m=1}^{M} P(B_m)² / Σ_{m'=1}^{M} P(B_m ∩ B_{m'}).
- Exploit pairwise independence to make statements about error exponents (as in the proof that linear codes achieve capacity on symmetric DMCs).
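de Caen's bound can be verified exactly on a small finite probability space. A numerical check of mine (uniform outcomes, events as subsets; terms with P(B_m) = 0 are dropped, which only weakens the bound):

```python
import random

def de_caen_lower_bound(events, n_outcomes):
    """Sum over m of P(B_m)^2 / sum_{m'} P(B_m & B_{m'}), uniform outcomes."""
    p = lambda s: len(s) / n_outcomes
    total = 0.0
    for bm in events:
        if not bm:
            continue   # P(B_m) = 0 contributes nothing
        denom = sum(p(bm & bm2) for bm2 in events)  # includes m' = m, so > 0
        total += p(bm) ** 2 / denom
    return total

rng = random.Random(3)
n_outcomes = 12
outcomes = range(n_outcomes)
for _ in range(200):
    events = [frozenset(o for o in outcomes if rng.random() < 0.4)
              for _ in range(5)]
    union = frozenset().union(*events)
    assert len(union) / n_outcomes >= de_caen_lower_bound(events, n_outcomes) - 1e-12
```

The bound only needs pairwise intersection probabilities, which is why pairwise independence of the error events suffices for the exponent analysis.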
Slides 61-68: Sparse sensing model

- Assume as usual that X is non-random, with rank(X) ≤ r = γn. The sensing matrices are now sparse; arithmetic is still performed in F_q.
- Each entry of each sensing matrix H_a is i.i.d. with a δ-sparse distribution on F_q:
  P([H_a]_{ij} = h) = 1 − δ if h = 0, and δ/(q − 1) if h ≠ 0.
- [Figure: the δ-sparse pmf, mass 1 − δ at 0 and δ/(q − 1) at each nonzero symbol.]
- Fewer additions and multiplications since H_a is sparse, so encoding is cheaper.
- Sparsity may also help in decoding via message-passing algorithms.
- How fast can δ, the sparsity factor, decay with n while recovery remains reliable?
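The difficulty with sparse sensing can be quantified. Over F_2, if D = Z − X has Hamming weight w, then ⟨D, H⟩ is the XOR of w i.i.d. Bernoulli(δ) bits, so P(⟨D, H⟩ = 0) = (1 + (1 − 2δ)^w)/2, which stays close to 1 (not 1/q = 1/2) when δw is small. This is exactly why low-weight differences Z − X need separate treatment in the proof below. A sketch of mine checking the formula by exact enumeration:

```python
from itertools import product

def p_zero_exact(w, delta):
    """Exact P(XOR of w iid Bernoulli(delta) bits = 0) by enumeration."""
    total = 0.0
    for bits in product([0, 1], repeat=w):
        pr = 1.0
        for b in bits:
            pr *= delta if b else (1 - delta)
        if sum(bits) % 2 == 0:
            total += pr
    return total

def p_zero_formula(w, delta):
    """Closed form: (1 + (1 - 2*delta)^w) / 2."""
    return (1 + (1 - 2 * delta) ** w) / 2

for w in range(1, 10):
    for delta in (0.05, 0.2, 0.5):
        assert abs(p_zero_exact(w, delta) - p_zero_formula(w, delta)) < 1e-9

# For small delta*w the collision probability is far above the uniform value 1/2.
assert p_zero_formula(3, 0.05) > 0.85
```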
Slides 69-73: Sparse sensing model (cont.)

The problem becomes more challenging:
- X is not sensed as much, so the measurements y^k do not contain as much information about X.
- The equality P(⟨Z, H_a⟩ = ⟨X, H_a⟩, a = 1, ..., k) = q^{−k} no longer holds.
Nevertheless...
Slides 74-76: Achievability under sparse model

Theorem (Achievability under sparse model). Assume:
- the sensing matrices H_a are drawn according to the δ-sparse distribution;
- the min-rank decoder is used;
- r/n → γ (constant).
If the sparsity factor δ = δ_n satisfies δ = Ω((log n)/n) (up to o(1) factors) and the number of measurements satisfies k > (2 + ε)γ(1 − γ/2)n², then P(E_n) → 0.
Slides 77-83: Proof Strategy

Approximate with the uniform model by splitting the union bound according to Hamming weight:

P(E_n) ≤ Σ_{Z ≠ X : rank(Z) ≤ rank(X), ‖Z − X‖_0 ≤ βn²} P(⟨Z, H_a⟩ = ⟨X, H_a⟩, a = 1, ..., k)
       + Σ_{Z ≠ X : rank(Z) ≤ rank(X), ‖Z − X‖_0 > βn²} P(⟨Z, H_a⟩ = ⟨X, H_a⟩, a = 1, ..., k)

- First term: few terms. Low Hamming weight, so we can afford a loose bound on the probability.
- Second term: many terms. Needs a tight bound on the probability (circular convolution of the δ-sparse pmf).
- Set β = Θ(δ/log n) to obtain the optimal split.
Slides 84-89: Concluding remarks

- Derived necessary and sufficient conditions for rank minimization over finite fields.
- Analyzed the sparse sensing model.
- Derived minimum rank distance properties of rank-metric codes [analogous to Barg and Forney (2002)].
- Drew analogies between the number of measurements and minimum distance properties.
- Open problem: can the sparsity level of Ω((log n)/n) be improved (reduced) further?
More informationMOST attention in the literature of network codes has
3862 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 56, NO. 8, AUGUST 2010 Efficient Network Code Design for Cyclic Networks Elona Erez, Member, IEEE, and Meir Feder, Fellow, IEEE Abstract This paper introduces
More informationMeasurements and Bits: Compressed Sensing meets Information Theory. Dror Baron ECE Department Rice University dsp.rice.edu/cs
Measurements and Bits: Compressed Sensing meets Information Theory Dror Baron ECE Department Rice University dsp.rice.edu/cs Sensing by Sampling Sample data at Nyquist rate Compress data using model (e.g.,
More informationDetecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference
Detecting Burnscar from Hyperspectral Imagery via Sparse Representation with Low-Rank Interference Minh Dao 1, Xiang Xiang 1, Bulent Ayhan 2, Chiman Kwan 2, Trac D. Tran 1 Johns Hopkins Univeristy, 3400
More informationAnalyzing the Peeling Decoder
Analyzing the Peeling Decoder Supplemental Material for Advanced Channel Coding Henry D. Pfister January 5th, 01 1 Introduction The simplest example of iterative decoding is the peeling decoder introduced
More information4. Simplicial Complexes and Simplicial Homology
MATH41071/MATH61071 Algebraic topology Autumn Semester 2017 2018 4. Simplicial Complexes and Simplicial Homology Geometric simplicial complexes 4.1 Definition. A finite subset { v 0, v 1,..., v r } R n
More information1. Introduction. performance of numerical methods. complexity bounds. structural convex optimization. course goals and topics
1. Introduction EE 546, Univ of Washington, Spring 2016 performance of numerical methods complexity bounds structural convex optimization course goals and topics 1 1 Some course info Welcome to EE 546!
More informationCombinatorial Selection and Least Absolute Shrinkage via The CLASH Operator
Combinatorial Selection and Least Absolute Shrinkage via The CLASH Operator Volkan Cevher Laboratory for Information and Inference Systems LIONS / EPFL http://lions.epfl.ch & Idiap Research Institute joint
More informationLecture 15: The subspace topology, Closed sets
Lecture 15: The subspace topology, Closed sets 1 The Subspace Topology Definition 1.1. Let (X, T) be a topological space with topology T. subset of X, the collection If Y is a T Y = {Y U U T} is a topology
More informationThe convex geometry of inverse problems
The convex geometry of inverse problems Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Joint work with Venkat Chandrasekaran Pablo Parrilo Alan Willsky Linear Inverse Problems
More informationLecture 19. Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer
CS-621 Theory Gems November 21, 2012 Lecture 19 Lecturer: Aleksander Mądry Scribes: Chidambaram Annamalai and Carsten Moldenhauer 1 Introduction We continue our exploration of streaming algorithms. First,
More informationIn what follows, we will focus on Voronoi diagrams in Euclidean space. Later, we will generalize to other distance spaces.
Voronoi Diagrams 4 A city builds a set of post offices, and now needs to determine which houses will be served by which office. It would be wasteful for a postman to go out of their way to make a delivery
More informationUnlabeled equivalence for matroids representable over finite fields
Unlabeled equivalence for matroids representable over finite fields November 16, 2012 S. R. Kingan Department of Mathematics Brooklyn College, City University of New York 2900 Bedford Avenue Brooklyn,
More informationLecture 2 September 3
EE 381V: Large Scale Optimization Fall 2012 Lecture 2 September 3 Lecturer: Caramanis & Sanghavi Scribe: Hongbo Si, Qiaoyang Ye 2.1 Overview of the last Lecture The focus of the last lecture was to give
More informationLecture 19 Subgradient Methods. November 5, 2008
Subgradient Methods November 5, 2008 Outline Lecture 19 Subgradients and Level Sets Subgradient Method Convergence and Convergence Rate Convex Optimization 1 Subgradients and Level Sets A vector s is a
More informationMDS (maximum-distance separable) codes over large
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 55, NO 4, APRIL 2009 1721 Cyclic Lowest Density MDS Array Codes Yuval Cassuto, Member, IEEE, and Jehoshua Bruck, Fellow, IEEE Abstract Three new families of
More informationLecture and notes by: Nate Chenette, Brent Myers, Hari Prasad November 8, Property Testing
Property Testing 1 Introduction Broadly, property testing is the study of the following class of problems: Given the ability to perform (local) queries concerning a particular object (e.g., a function,
More information1. Represent each of these relations on {1, 2, 3} with a matrix (with the elements of this set listed in increasing order).
Exercises Exercises 1. Represent each of these relations on {1, 2, 3} with a matrix (with the elements of this set listed in increasing order). a) {(1, 1), (1, 2), (1, 3)} b) {(1, 2), (2, 1), (2, 2), (3,
More informationChapter S:V. V. Formal Properties of A*
Chapter S:V V. Formal Properties of A* Properties of Search Space Graphs Auxiliary Concepts Roadmap Completeness of A* Admissibility of A* Efficiency of A* Monotone Heuristic Functions S:V-1 Formal Properties
More informationChapter 4. Matrix and Vector Operations
1 Scope of the Chapter Chapter 4 This chapter provides procedures for matrix and vector operations. This chapter (and Chapters 5 and 6) can handle general matrices, matrices with special structure and
More informationError-Correcting Codes
Error-Correcting Codes Michael Mo 10770518 6 February 2016 Abstract An introduction to error-correcting codes will be given by discussing a class of error-correcting codes, called linear block codes. The
More informationHomework #5 Solutions Due: July 17, 2012 G = G = Find a standard form generator matrix for a code equivalent to C.
Homework #5 Solutions Due: July 7, Do the following exercises from Lax: Page 4: 4 Page 34: 35, 36 Page 43: 44, 45, 46 4 Let C be the (5, 3) binary code with generator matrix G = Find a standard form generator
More informationPolytopes Course Notes
Polytopes Course Notes Carl W. Lee Department of Mathematics University of Kentucky Lexington, KY 40506 lee@ms.uky.edu Fall 2013 i Contents 1 Polytopes 1 1.1 Convex Combinations and V-Polytopes.....................
More informationEfficiently decodable insertion/deletion codes for high-noise and high-rate regimes
Efficiently decodable insertion/deletion codes for high-noise and high-rate regimes Venkatesan Guruswami Carnegie Mellon University Pittsburgh, PA 53 Email: guruswami@cmu.edu Ray Li Carnegie Mellon University
More information60 2 Convex sets. {x a T x b} {x ã T x b}
60 2 Convex sets Exercises Definition of convexity 21 Let C R n be a convex set, with x 1,, x k C, and let θ 1,, θ k R satisfy θ i 0, θ 1 + + θ k = 1 Show that θ 1x 1 + + θ k x k C (The definition of convexity
More informationGiovanni De Micheli. Integrated Systems Centre EPF Lausanne
Two-level Logic Synthesis and Optimization Giovanni De Micheli Integrated Systems Centre EPF Lausanne This presentation can be used for non-commercial purposes as long as this note and the copyright footers
More informationLecture 17. Lower bound for variable-length source codes with error. Coding a sequence of symbols: Rates and scheme (Arithmetic code)
Lecture 17 Agenda for the lecture Lower bound for variable-length source codes with error Coding a sequence of symbols: Rates and scheme (Arithmetic code) Introduction to universal codes 17.1 variable-length
More informationMathematics and Computer Science
Technical Report TR-2006-010 Revisiting hypergraph models for sparse matrix decomposition by Cevdet Aykanat, Bora Ucar Mathematics and Computer Science EMORY UNIVERSITY REVISITING HYPERGRAPH MODELS FOR
More informationNon-exhaustive, Overlapping k-means
Non-exhaustive, Overlapping k-means J. J. Whang, I. S. Dhilon, and D. F. Gleich Teresa Lebair University of Maryland, Baltimore County October 29th, 2015 Teresa Lebair UMBC 1/38 Outline Introduction NEO-K-Means
More informationDM545 Linear and Integer Programming. Lecture 2. The Simplex Method. Marco Chiarandini
DM545 Linear and Integer Programming Lecture 2 The Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark Outline 1. 2. 3. 4. Standard Form Basic Feasible Solutions
More informationA Toolkit for List Recovery. Nick Wasylyshyn A THESIS. Mathematics
A Toolkit for List Recovery Nick Wasylyshyn A THESIS in Mathematics Presented to the Faculties of the University of Pennsylvania in Partial Fulfillment of the Requirements for the Degree of Master of Arts
More informationEC500. Design of Secure and Reliable Hardware. Lecture 9. Mark Karpovsky
EC500 Design of Secure and Reliable Hardware Lecture 9 Mark Karpovsky 1 1 Arithmetical Codes 1.1 Detection and Correction of errors in arithmetical channels (adders, multipliers, etc) Let = 0,1,,2 1 and
More informationTopological properties of convex sets
Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 5: Topological properties of convex sets 5.1 Interior and closure of convex sets Let
More informationFast Approximation of Matrix Coherence and Statistical Leverage
Fast Approximation of Matrix Coherence and Statistical Leverage Michael W. Mahoney Stanford University ( For more info, see: http:// cs.stanford.edu/people/mmahoney/ or Google on Michael Mahoney ) Statistical
More informationGenerell Topologi. Richard Williamson. May 6, 2013
Generell Topologi Richard Williamson May 6, 2013 1 8 Thursday 7th February 8.1 Using connectedness to distinguish between topological spaces I Proposition 8.1. Let (, O ) and (Y, O Y ) be topological spaces.
More informationLOW-density parity-check (LDPC) codes are widely
1460 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL 53, NO 4, APRIL 2007 Tree-Based Construction of LDPC Codes Having Good Pseudocodeword Weights Christine A Kelley, Member, IEEE, Deepak Sridhara, Member,
More informationarxiv: v2 [math.oc] 29 Jan 2015
Minimum Cost Input/Output Design for Large-Scale Linear Structural Systems Sérgio Pequito, Soummya Kar A. Pedro Aguiar, 1 arxiv:1407.3327v2 [math.oc] 29 Jan 2015 Abstract In this paper, we provide optimal
More information3 No-Wait Job Shops with Variable Processing Times
3 No-Wait Job Shops with Variable Processing Times In this chapter we assume that, on top of the classical no-wait job shop setting, we are given a set of processing times for each operation. We may select
More informationInterleaving Schemes on Circulant Graphs with Two Offsets
Interleaving Schemes on Circulant raphs with Two Offsets Aleksandrs Slivkins Department of Computer Science Cornell University Ithaca, NY 14853 slivkins@cs.cornell.edu Jehoshua Bruck Department of Electrical
More informationSection 6.3: Further Rules for Counting Sets
Section 6.3: Further Rules for Counting Sets Often when we are considering the probability of an event, that event is itself a union of other events. For example, suppose there is a horse race with three
More informationMultiresponse Sparse Regression with Application to Multidimensional Scaling
Multiresponse Sparse Regression with Application to Multidimensional Scaling Timo Similä and Jarkko Tikka Helsinki University of Technology, Laboratory of Computer and Information Science P.O. Box 54,
More informationArithmetic in Quaternion Algebras
Arithmetic in Quaternion Algebras 31st Automorphic Forms Workshop Jordan Wiebe University of Oklahoma March 6, 2017 Jordan Wiebe (University of Oklahoma) Arithmetic in Quaternion Algebras March 6, 2017
More informationComputing over Multiple-Access Channels with Connections to Wireless Network Coding
ISIT 06: Computing over MACS 1 / 20 Computing over Multiple-Access Channels with Connections to Wireless Network Coding Bobak Nazer and Michael Gastpar Wireless Foundations Center Department of Electrical
More informationAdvanced Operations Research Techniques IE316. Quiz 1 Review. Dr. Ted Ralphs
Advanced Operations Research Techniques IE316 Quiz 1 Review Dr. Ted Ralphs IE316 Quiz 1 Review 1 Reading for The Quiz Material covered in detail in lecture. 1.1, 1.4, 2.1-2.6, 3.1-3.3, 3.5 Background material
More informationTwo-graphs revisited. Peter J. Cameron University of St Andrews Modern Trends in Algebraic Graph Theory Villanova, June 2014
Two-graphs revisited Peter J. Cameron University of St Andrews Modern Trends in Algebraic Graph Theory Villanova, June 2014 History The icosahedron has six diagonals, any two making the same angle (arccos(1/
More informationThe Probabilistic Method
The Probabilistic Method Po-Shen Loh June 2010 1 Warm-up 1. (Russia 1996/4 In the Duma there are 1600 delegates, who have formed 16000 committees of 80 persons each. Prove that one can find two committees
More informationPolynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication
Polynomial Codes: an Optimal Design for High-Dimensional Coded Matrix Multiplication Qian Yu, Mohammad Ali Maddah-Ali, and A. Salman Avestimehr Department of Electrical Engineering, University of Southern
More informationEmbeddability of Arrangements of Pseudocircles into the Sphere
Embeddability of Arrangements of Pseudocircles into the Sphere Ronald Ortner Department Mathematik und Informationstechnologie, Montanuniversität Leoben, Franz-Josef-Straße 18, 8700-Leoben, Austria Abstract
More informationError correction guarantees
Error correction guarantees Drawback of asymptotic analyses Valid only as long as the incoming messages are independent. (independence assumption) The messages are independent for l iterations only if
More informationDescribe the two most important ways in which subspaces of F D arise. (These ways were given as the motivation for looking at subspaces.
Quiz Describe the two most important ways in which subspaces of F D arise. (These ways were given as the motivation for looking at subspaces.) What are the two subspaces associated with a matrix? Describe
More informationBELOW, we consider decoding algorithms for Reed Muller
4880 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 11, NOVEMBER 2006 Error Exponents for Recursive Decoding of Reed Muller Codes on a Binary-Symmetric Channel Marat Burnashev and Ilya Dumer, Senior
More informationarxiv: v1 [math.co] 22 Dec 2016
THE LERAY DIMENSION OF A CONVEX CODE CARINA CURTO AND RAMÓN VERA arxiv:607797v [mathco] Dec 06 ABSTRACT Convex codes were recently introduced as models for neural codes in the brain Any convex code C has
More informationGraphs and Network Flows IE411. Lecture 21. Dr. Ted Ralphs
Graphs and Network Flows IE411 Lecture 21 Dr. Ted Ralphs IE411 Lecture 21 1 Combinatorial Optimization and Network Flows In general, most combinatorial optimization and integer programming problems are
More informationThe convex geometry of inverse problems
The convex geometry of inverse problems Benjamin Recht Department of Computer Sciences University of Wisconsin-Madison Joint work with Venkat Chandrasekaran Pablo Parrilo Alan Willsky Linear Inverse Problems
More informationSubspace Clustering by Block Diagonal Representation
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE Subspace Clustering by Block Diagonal Representation Canyi Lu, Student Member, IEEE, Jiashi Feng, Zhouchen Lin, Senior Member, IEEE, Tao Mei,
More informationTOPOLOGY, DR. BLOCK, FALL 2015, NOTES, PART 3.
TOPOLOGY, DR. BLOCK, FALL 2015, NOTES, PART 3. 301. Definition. Let m be a positive integer, and let X be a set. An m-tuple of elements of X is a function x : {1,..., m} X. We sometimes use x i instead
More informationTreewidth and graph minors
Treewidth and graph minors Lectures 9 and 10, December 29, 2011, January 5, 2012 We shall touch upon the theory of Graph Minors by Robertson and Seymour. This theory gives a very general condition under
More informationMatrix algorithms: fast, stable, communication-optimizing random?!
Matrix algorithms: fast, stable, communication-optimizing random?! Ioana Dumitriu Department of Mathematics University of Washington (Seattle) Joint work with Grey Ballard, James Demmel, Olga Holtz, Robert
More informationSubspace Clustering by Block Diagonal Representation
IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE Subspace Clustering by Block Diagonal Representation Canyi Lu, Student Member, IEEE, Jiashi Feng, Zhouchen Lin, Senior Member, IEEE, Tao Mei,
More informationThe External Network Problem
The External Network Problem Jan van den Heuvel and Matthew Johnson CDAM Research Report LSE-CDAM-2004-15 December 2004 Abstract The connectivity of a communications network can often be enhanced if the
More informationTELCOM2125: Network Science and Analysis
School of Information Sciences University of Pittsburgh TELCOM2125: Network Science and Analysis Konstantinos Pelechrinis Spring 2015 2 Part 4: Dividing Networks into Clusters The problem l Graph partitioning
More informationSection 5 Convex Optimisation 1. W. Dai (IC) EE4.66 Data Proc. Convex Optimisation page 5-1
Section 5 Convex Optimisation 1 W. Dai (IC) EE4.66 Data Proc. Convex Optimisation 1 2018 page 5-1 Convex Combination Denition 5.1 A convex combination is a linear combination of points where all coecients
More informationCHAPTER 3 FUZZY RELATION and COMPOSITION
CHAPTER 3 FUZZY RELATION and COMPOSITION The concept of fuzzy set as a generalization of crisp set has been introduced in the previous chapter. Relations between elements of crisp sets can be extended
More informationExact adaptive parallel algorithms for data depth problems. Vera Rosta Department of Mathematics and Statistics McGill University, Montreal
Exact adaptive parallel algorithms for data depth problems Vera Rosta Department of Mathematics and Statistics McGill University, Montreal joint work with Komei Fukuda School of Computer Science McGill
More informationarxiv: v1 [math.co] 27 Feb 2015
Mode Poset Probability Polytopes Guido Montúfar 1 and Johannes Rauh 2 arxiv:1503.00572v1 [math.co] 27 Feb 2015 1 Max Planck Institute for Mathematics in the Sciences, Inselstraße 22, 04103 Leipzig, Germany,
More informationConvex Optimization MLSS 2015
Convex Optimization MLSS 2015 Constantine Caramanis The University of Texas at Austin The Optimization Problem minimize : f (x) subject to : x X. The Optimization Problem minimize : f (x) subject to :
More informationMultidimensional Decoding Networks for Trapping Set Analysis
Multidimensional Decoding Networks for Trapping Set Analysis Allison Beemer* and Christine A. Kelley University of Nebraska-Lincoln, Lincoln, NE, U.S.A. allison.beemer@huskers.unl.edu Abstract. We present
More informationGraph drawing in spectral layout
Graph drawing in spectral layout Maureen Gallagher Colleen Tygh John Urschel Ludmil Zikatanov Beginning: July 8, 203; Today is: October 2, 203 Introduction Our research focuses on the use of spectral graph
More informationPebble Sets in Convex Polygons
2 1 Pebble Sets in Convex Polygons Kevin Iga, Randall Maddox June 15, 2005 Abstract Lukács and András posed the problem of showing the existence of a set of n 2 points in the interior of a convex n-gon
More informationComputing the Minimum Hamming Distance for Z 2 Z 4 -Linear Codes
Computing the Minimum Hamming Distance for Z 2 Z 4 -Linear Codes Marta Pujol and Mercè Villanueva Combinatorics, Coding and Security Group (CCSG) Universitat Autònoma de Barcelona (UAB) VIII JMDA, Almería
More informationEfficient Universal Recovery in Broadcast Networks
Efficient Universal Recovery in Broadcast Networks Thomas Courtade and Rick Wesel UCLA September 30, 2010 Courtade and Wesel (UCLA) Efficient Universal Recovery Allerton 2010 1 / 19 System Model and Problem
More informationAn I/O-Complexity Lower Bound for All Recursive Matrix Multiplication Algorithms by Path-Routing. Jacob N. Scott
An I/O-Complexity Lower Bound for All Recursive Matrix Multiplication Algorithms by Path-Routing by Jacob N. Scott A dissertation submitted in partial satisfaction of the requirements for the degree of
More informationRobust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma
Robust Face Recognition via Sparse Representation Authors: John Wright, Allen Y. Yang, Arvind Ganesh, S. Shankar Sastry, and Yi Ma Presented by Hu Han Jan. 30 2014 For CSE 902 by Prof. Anil K. Jain: Selected
More informationPOLYHEDRAL GEOMETRY. Convex functions and sets. Mathematical Programming Niels Lauritzen Recall that a subset C R n is convex if
POLYHEDRAL GEOMETRY Mathematical Programming Niels Lauritzen 7.9.2007 Convex functions and sets Recall that a subset C R n is convex if {λx + (1 λ)y 0 λ 1} C for every x, y C and 0 λ 1. A function f :
More informationIntroduction to Topics in Machine Learning
Introduction to Topics in Machine Learning Namrata Vaswani Department of Electrical and Computer Engineering Iowa State University Namrata Vaswani 1/ 27 Compressed Sensing / Sparse Recovery: Given y :=
More informationFlexible Coloring. Xiaozhou Li a, Atri Rudra b, Ram Swaminathan a. Abstract
Flexible Coloring Xiaozhou Li a, Atri Rudra b, Ram Swaminathan a a firstname.lastname@hp.com, HP Labs, 1501 Page Mill Road, Palo Alto, CA 94304 b atri@buffalo.edu, Computer Sc. & Engg. dept., SUNY Buffalo,
More informationT325 Summary T305 T325 B BLOCK 4 T325. Session 3. Dr. Saatchi, Seyed Mohsen. Prepared by:
T305 T325 B BLOCK 4 T325 Summary Prepared by: Session 3 [Type Dr. Saatchi, your address] Seyed Mohsen [Type your phone number] [Type your e-mail address] Dr. Saatchi, Seyed Mohsen T325 Error Control Coding
More informationLecture 5: Simplicial Complex
Lecture 5: Simplicial Complex 2-Manifolds, Simplex and Simplicial Complex Scribed by: Lei Wang First part of this lecture finishes 2-Manifolds. Rest part of this lecture talks about simplicial complex.
More informationMATH 423 Linear Algebra II Lecture 17: Reduced row echelon form (continued). Determinant of a matrix.
MATH 423 Linear Algebra II Lecture 17: Reduced row echelon form (continued). Determinant of a matrix. Row echelon form A matrix is said to be in the row echelon form if the leading entries shift to the
More informationSection 17. Closed Sets and Limit Points
17. Closed Sets and Limit Points 1 Section 17. Closed Sets and Limit Points Note. In this section, we finally define a closed set. We also introduce several traditional topological concepts, such as limit
More informationMinimum Delay Packet-sizing for Linear Multi-hop Networks with Cooperative Transmissions
Minimum Delay acket-sizing for inear Multi-hop Networks with Cooperative Transmissions Ning Wen and Randall A. Berry Department of Electrical Engineering and Computer Science Northwestern University, Evanston,
More information1 Undirected Vertex Geography UVG
Geography Start with a chip sitting on a vertex v of a graph or digraph G. A move consists of moving the chip to a neighbouring vertex. In edge geography, moving the chip from x to y deletes the edge (x,
More informationIntegrated Math I. IM1.1.3 Understand and use the distributive, associative, and commutative properties.
Standard 1: Number Sense and Computation Students simplify and compare expressions. They use rational exponents and simplify square roots. IM1.1.1 Compare real number expressions. IM1.1.2 Simplify square
More informationVoronoi Region. K-means method for Signal Compression: Vector Quantization. Compression Formula 11/20/2013
Voronoi Region K-means method for Signal Compression: Vector Quantization Blocks of signals: A sequence of audio. A block of image pixels. Formally: vector example: (0.2, 0.3, 0.5, 0.1) A vector quantizer
More informationLecture Notes 2: The Simplex Algorithm
Algorithmic Methods 25/10/2010 Lecture Notes 2: The Simplex Algorithm Professor: Yossi Azar Scribe:Kiril Solovey 1 Introduction In this lecture we will present the Simplex algorithm, finish some unresolved
More informationPhil 320 Chapter 1: Sets, Functions and Enumerability I. Sets Informally: a set is a collection of objects. The objects are called members or
Phil 320 Chapter 1: Sets, Functions and Enumerability I. Sets Informally: a set is a collection of objects. The objects are called members or elements of the set. a) Use capital letters to stand for sets
More informationCHAPTER 8. Copyright Cengage Learning. All rights reserved.
CHAPTER 8 RELATIONS Copyright Cengage Learning. All rights reserved. SECTION 8.3 Equivalence Relations Copyright Cengage Learning. All rights reserved. The Relation Induced by a Partition 3 The Relation
More informationLecture 2. Topology of Sets in R n. August 27, 2008
Lecture 2 Topology of Sets in R n August 27, 2008 Outline Vectors, Matrices, Norms, Convergence Open and Closed Sets Special Sets: Subspace, Affine Set, Cone, Convex Set Special Convex Sets: Hyperplane,
More information