Coding Theorems for Reversible Embedding
Slide 1: Coding Theorems for Reversible Embedding. Frans Willems and Ton Kalker, T.U. Eindhoven / Philips Research. DIMACS, March 16-19, 2003.
Slide 2: Outline
1. Gelfand-Pinsker coding theorem
2. Noise-free embedding
3. Reversible embedding
4. Robust and reversible embedding
5. Partially reversible embedding
6. Remarks
Slide 3: I. The Gelfand-Pinsker Coding Theorem

[Block diagram: message $W$ and side information $X^N \sim P_s(x)$ enter the encoder $Y^N = e(W, X^N)$; the channel $P_c(z|y,x)$ produces $Z^N$; the decoder outputs $\hat{W} = d(Z^N)$.]

Messages: $\Pr\{W = w\} = 1/M$ for $w \in \{1, 2, \ldots, M\}$.
Side information: $\Pr\{X^N = x^N\} = \prod_{n=1}^{N} P_s(x_n)$ for $x^N \in \mathcal{X}^N$.
Channel: discrete memoryless $\{\mathcal{Y} \times \mathcal{X}, P_c(z|y,x), \mathcal{Z}\}$.
Error probability: $P_E = \Pr\{\hat{W} \neq W\}$.
Rate: $R = \frac{1}{N} \log_2 M$.
Slide 4: Capacity

The side-information capacity $C_{si}$ is the largest $\rho$ such that for all $\epsilon > 0$ there exist, for all large enough $N$, encoders and decoders with $R \geq \rho - \epsilon$ and $P_E \leq \epsilon$.

THEOREM (Gelfand-Pinsker [1980]):
$$C_{si} = \max_{P_t(u,y|x)} \big[ I(U;Z) - I(U;X) \big]. \quad (1)$$

Achievability proof: Fix a test channel $P_t(u,y|x)$ and consider sets $A_\epsilon(\cdot)$ of strongly typical sequences.
(a) For each message index $w \in \{1, \ldots, 2^{NR}\}$, generate $2^{NR_u}$ sequences $u^N$ at random according to $P(u) = \sum_{x,y} P_s(x) P_t(u,y|x)$. Give these sequences the label $w$.
(b) When message index $w$ has to be transmitted, choose a sequence $u^N$ having label $w$ such that $(u^N, x^N) \in A_\epsilon(U,X)$. Such a sequence exists almost always if $R_u > I(U;X)$ (roughly).
Slide 5: (continued)

(c) The input sequence $y^N$ results from applying the channel $P(y|u,x) = P_t(u,y|x) / \sum_{y'} P_t(u,y'|x)$ to $u^N$ and $x^N$. Then $y^N$ is transmitted.
(d) The decoder, upon receiving $z^N$, looks for the unique sequence $u^N$ such that $(u^N, z^N) \in A_\epsilon(U,Z)$. If $R + R_u < I(U;Z)$ (roughly), such a unique sequence exists. The message index is the label of this $u^N$.

Conclusion: every $R < I(U;Z) - I(U;X)$ is achievable.

Observations:
A: As an intermediate result, the decoder recovers the sequence $u^N$.
B: The transmitted $u^N$ is jointly typical with the side-information sequence $x^N$, i.e. $(u^N, x^N) \in A_\epsilon(U,X)$, thus their joint composition is OK. Note that $P(u,x) = \sum_y P_s(x) P_t(u,y|x)$.
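The binning rate above can be sanity-checked numerically. The sketch below (not from the talk) evaluates $I(U;Z) - I(U;X)$ for a made-up binary host and test channel in the noiseless case $Z = Y$ with the choice $U = Y$; the Gelfand-Pinsker rate should then reduce to $H(Y|X)$, as slide 8 states. All distributions here are hypothetical illustrations.

```python
import math
from collections import defaultdict

def entropy(pmf):
    """Shannon entropy in bits of a dict {symbol: prob}."""
    return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

def mutual_information(joint):
    """I(A;B) in bits from a dict {(a, b): prob}."""
    pa, pb = defaultdict(float), defaultdict(float)
    for (a, b), p in joint.items():
        pa[a] += p
        pb[b] += p
    return sum(p * math.log2(p / (pa[a] * pb[b]))
               for (a, b), p in joint.items() if p > 0)

# Hypothetical binary host and test channel (illustration only).
Ps = {0: 0.8, 1: 0.2}                 # host P_s(x)
Pt = {0: {0: 0.9, 1: 0.1},            # test channel P_t(y|x)
      1: {0: 0.2, 1: 0.8}}

# Noiseless channel (Z = Y) with the auxiliary choice U = Y, so the
# joints P(u, x) and P(u, z) are P(y, x) and the diagonal P(y, y).
joint_yx = {(y, x): Ps[x] * Pt[x][y] for x in Ps for y in (0, 1)}
joint_yz = defaultdict(float)
for (y, x), p in joint_yx.items():
    joint_yz[(y, y)] += p

rate = mutual_information(joint_yz) - mutual_information(joint_yx)
# I(Y;Y) - I(Y;X) = H(Y) - I(X;Y) = H(Y|X):
h_y_given_x = sum(Ps[x] * entropy(Pt[x]) for x in Ps)
print(f"GP rate {rate:.4f} bits, H(Y|X) = {h_y_given_x:.4f} bits")
```

The two printed numbers agree, confirming that for this specialization the random-binning rate is exactly the conditional entropy $H(Y|X)$.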
Slide 6: II. Noise-free Embedding

[Block diagram: host $X^N \sim P_s(x)$ and message $W$ enter the encoder $Y^N = e(W, X^N)$; the decoder outputs $\hat{W} = d(Y^N)$.]

Messages: $\Pr\{W = w\} = 1/M$ for $w \in \{1, 2, \ldots, M\}$.
Source (host): $\Pr\{X^N = x^N\} = \prod_{n=1}^{N} P_s(x_n)$ for $x^N \in \mathcal{X}^N$.
Error probability: $P_E = \Pr\{\hat{W} \neq W\}$.
Rate: $R = \frac{1}{N} \log_2 M$.
Embedding distortion: $D_{xy} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xy}(X_n, e_n(W, X^N))\big]$ for some distortion matrix $\{D_{xy}(x,y), x \in \mathcal{X}, y \in \mathcal{Y}\}$.
Slide 7: Achievable region for noise-free embedding

A rate-distortion pair $(\rho, \Delta_{xy})$ is said to be achievable if for all $\epsilon > 0$ there exist, for all large enough $N$, encoders and decoders such that $R \geq \rho - \epsilon$, $D_{xy} \leq \Delta_{xy} + \epsilon$, and $P_E \leq \epsilon$.

THEOREM (Chen [2000], Barron [2000]): The set of achievable rate-distortion pairs is equal to
$$\mathcal{G}_{nfe} = \Big\{(\rho, \Delta_{xy}) : 0 \leq \rho \leq H(Y|X),\ \Delta_{xy} \geq \sum_{x,y} P(x,y) D_{xy}(x,y),\ \text{for } P(x,y) = P_s(x) P_t(y|x)\Big\}. \quad (2)$$

Again $\{\mathcal{X}, P_t(y|x), \mathcal{Y}\}$ is called the test channel.
Slide 8: Proof

Achievability: In the Gelfand-Pinsker achievability proof, note that $Z = Y$ (noiseless channel) and take the auxiliary random variable $U = Y$. Then $(x^N, y^N) \in A_\epsilon(X,Y)$, hence $D_{xy}$ is OK. For the embedding rate we obtain
$$R = I(U;Z) - I(U;X) = I(Y;Y) - I(Y;X) = H(Y|X).$$

Converse, rate part:
$\log_2 M = H(W) \leq H(W) - H(W|\hat{W}) + \text{Fano term}$
$\leq H(W|X^N) - H(W|X^N, Y^N) + \text{Fano term}$
$= I(W; Y^N | X^N) + \text{Fano term}$
$\leq H(Y^N | X^N) + \text{Fano term}$
$\leq \sum_{n=1}^{N} H(Y_n|X_n) + \text{Fano term}$
$\leq N\,H(Y|X) + \text{Fano term},$
Slide 9: (continued)

where $X$ and $Y$ are random variables with
$$\Pr\{(X,Y) = (x,y)\} = \frac{1}{N} \sum_{n=1}^{N} \Pr\{(X_n, Y_n) = (x,y)\}$$
for $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. Note that $\Pr\{X = x\} = P_s(x)$ for $x \in \mathcal{X}$.

Distortion part:
$$D_{xy} = \sum_{x^N, y^N} \Pr\{(X^N, Y^N) = (x^N, y^N)\}\,\frac{1}{N} \sum_{n} D_{xy}(x_n, y_n) = \sum_{x,y} \Pr\{(X,Y) = (x,y)\}\,D_{xy}(x,y).$$

Let $P_E \to 0$, etc.
Slide 10: III. Reversible Embedding

[Block diagram: host $X^N \sim P_s(x)$ and message $W$ enter the encoder $Y^N = e(W, X^N)$; the decoder outputs $(\hat{W}, \hat{X}^N) = d(Y^N)$.]

Messages: $\Pr\{W = w\} = 1/M$ for $w \in \{1, 2, \ldots, M\}$.
Source (host): $\Pr\{X^N = x^N\} = \prod_{n=1}^{N} P_s(x_n)$ for $x^N \in \mathcal{X}^N$.
Error probability: $P_E = \Pr\{\hat{W} \neq W \text{ or } \hat{X}^N \neq X^N\}$.
Rate: $R = \frac{1}{N} \log_2 M$.
Embedding distortion: $D_{xy} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xy}(X_n, e_n(W, X^N))\big]$ for some distortion matrix $\{D_{xy}(x,y), x \in \mathcal{X}, y \in \mathcal{Y}\}$.

Inspired by Fridrich, Goljan, and Du, "Lossless data embedding for all image formats," Proc. SPIE Security and Watermarking of Multimedia Contents, San Jose, CA.
Slide 11: Achievable region for reversible embedding

A rate-distortion pair $(\rho, \Delta_{xy})$ is said to be achievable if for all $\epsilon > 0$ there exist, for all large enough $N$, encoders and decoders such that $R \geq \rho - \epsilon$, $D_{xy} \leq \Delta_{xy} + \epsilon$, and $P_E \leq \epsilon$.

RESULT (Kalker-Willems [2002]): The set of achievable rate-distortion pairs is equal to
$$\mathcal{G}_{re} = \Big\{(\rho, \Delta_{xy}) : 0 \leq \rho \leq H(Y) - H(X),\ \Delta_{xy} \geq \sum_{x,y} P(x,y) D_{xy}(x,y),\ \text{for } P(x,y) = P_s(x) P_t(y|x)\Big\}. \quad (3)$$

Note that $\{\mathcal{X}, P_t(y|x), \mathcal{Y}\}$ is the test channel.
Slide 12: Proof

Achievability: In the Gelfand-Pinsker achievability proof, note that $Z = Y$ (noiseless channel) and take the auxiliary random variable $U = [X, Y]$. Then $x^N$ can be reconstructed by the decoder, and $(x^N, y^N) \in A_\epsilon(X,Y)$, hence $D_{xy}$ is OK. For the embedding rate we obtain
$$R = I(U;Z) - I(U;X) = I([X,Y];Y) - I([X,Y];X) = H(Y) - H(X).$$

Converse, rate part:
$\log_2 M = H(W) \leq H(W) - H(W, X^N|\hat{W}, \hat{X}^N) + \text{Fano term}$
$= H(W, X^N) - H(W, X^N|\hat{W}, \hat{X}^N) - H(X^N) + \text{Fano term}$
$\leq H(W, X^N) - H(W, X^N|Y^N, \hat{W}, \hat{X}^N) - H(X^N) + \text{Fano term}$
$= I(W, X^N; Y^N) - H(X^N) + \text{Fano term}$
$= H(Y^N) - H(X^N) + \text{Fano term}$
$\leq \sum_{n=1}^{N} \big[H(Y_n) - H(X_n)\big] + \text{Fano term}$
$\leq N\big[H(Y) - H(X)\big] + \text{Fano term},$
Slide 13: (continued)

where $X$ and $Y$ are random variables with
$$\Pr\{(X,Y) = (x,y)\} = \frac{1}{N} \sum_{n=1}^{N} \Pr\{(X_n, Y_n) = (x,y)\}$$
for $x \in \mathcal{X}$ and $y \in \mathcal{Y}$. Note that $\Pr\{X = x\} = P_s(x)$ for $x \in \mathcal{X}$.

Distortion part:
$$D_{xy} = \sum_{x^N, y^N} \Pr\{(X^N, Y^N) = (x^N, y^N)\}\,\frac{1}{N} \sum_{n} D_{xy}(x_n, y_n) = \sum_{x,y} \Pr\{(X,Y) = (x,y)\}\,D_{xy}(x,y).$$

Let $P_E \to 0$, etc.
Slide 14: Example: binary source, Hamming distortion

[Test-channel diagram: binary $X$ with $\Pr\{X=1\} = p_x$ mapped to binary $Y$ with $\Pr\{Y=1\} = p_y$; crossover probabilities $d_1 = \Pr\{Y=0|X=1\}$ and $d_0 = \Pr\{Y=1|X=0\}$.]

Since we can write $\Delta_{xy} \geq p_x d_1 + (1-p_x) d_0$ and $p_y = p_x(1-d_1) + (1-p_x) d_0$, we get $p_y \leq \Delta_{xy} + p_x(1 - 2d_1)$.

Assume w.l.o.g. that $p_x \leq 1/2$. First let $\Delta_{xy}$ be such that $\Delta_{xy} + p_x \leq 1/2$, i.e. $\Delta_{xy} \leq 1/2 - p_x$. Then we have $p_y \leq \Delta_{xy} + p_x \leq 1/2$,
Slide 15: (continued)

and hence $\rho \leq h(p_y) - h(p_x) \leq h(p_x + \Delta_{xy}) - h(p_x)$. However, $\rho = h(p_x + \Delta_{xy}) - h(p_x)$ is achievable with distortion $\Delta_{xy}$ by taking $d_1 = 0$ and $d_0 = \Delta_{xy}/(1-p_x)$. Note that the test channel is not symmetric and that
$$d_0 = \frac{\Delta_{xy}}{1-p_x} \leq \frac{1/2 - p_x}{1-p_x} \leq 1/2.$$
For $\Delta_{xy} + p_x \geq 1/2$ the rate is bounded as $\rho \leq 1 - h(p_x)$, and this bound is also achievable.
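The closed-form curve just derived is easy to evaluate. A minimal sketch (illustrative; `reversible_rate` is a hypothetical helper name) computes $\rho = h(p_x + \Delta_{xy}) - h(p_x)$ for $\Delta_{xy} \leq 1/2 - p_x$ and the saturated value $1 - h(p_x)$ beyond:

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def reversible_rate(px, delta):
    """Maximum reversible-embedding rate for a Bernoulli(px) host
    under Hamming distortion at most delta (assumes px <= 1/2)."""
    if delta <= 0.5 - px:
        return h(px + delta) - h(px)
    return 1.0 - h(px)  # curve saturates once p_y can reach 1/2

px = 0.2
for delta in (0.0, 0.1, 0.3, 0.5):
    print(f"delta = {delta:.1f}  rho = {reversible_rate(px, delta):.4f}")
```

For $p_x = 0.2$ the curve rises from zero and flattens at $1 - h(0.2)$ once $\Delta_{xy} \geq 0.3$, matching the plot on the next slide.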
Slide 16: [Figure: plot of the rate-distortion region $\mathcal{G}_{re}$; horizontal axis distortion $\Delta_{xy}$, vertical axis rate $\rho$ in bits, for $p_x = 0.2$. Maximum embedding rate $1 - h(0.2)$.]
Slide 17: Another perspective

[Diagram: consecutive blocks $x^N(k), x^N(k+1)$ with messages $w(k), w(k+1)$ embedded into $y^N(k), y^N(k+1)$.]

Consider a blocked system with blocks of length $N$. In block $k$, message bits can be embedded (noise-free) at rate $H(Y|X)$ with the corresponding distortion. Then in block $k+1$, message bits are embedded that allow for reconstruction of $x^N(k)$ given $y^N(k)$. This requires $N H(X|Y)$ bits. Therefore the resulting embedding rate is
$$R = H(Y|X) - H(X|Y) = H(Y,X) - H(X) - H(X|Y) = H(Y) - H(X).$$
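The rate accounting in this two-block argument rests on the identity $H(Y|X) - H(X|Y) = H(Y) - H(X)$, which holds for any joint distribution. A quick numerical check with an arbitrary made-up joint:

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Any joint P(x, y) works; this one is hypothetical.
joint = {(0, 0): 0.4, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.3}

px, py = defaultdict(float), defaultdict(float)
for (x, y), p in joint.items():
    px[x] += p
    py[y] += p

H_xy = entropy(joint.values())
H_x, H_y = entropy(px.values()), entropy(py.values())
H_y_given_x = H_xy - H_x  # chain rule
H_x_given_y = H_xy - H_y

# The two-block accounting of slide 17, both sides:
print(H_y_given_x - H_x_given_y, H_y - H_x)
```

Both printed values coincide, so embedding at $H(Y|X)$ and paying back $H(X|Y)$ for reversibility nets exactly $H(Y) - H(X)$ per symbol.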
Slide 18: IV. Robust and Reversible Embedding

[Block diagram: host $X^N \sim P_s(x)$ and message $W$ enter the encoder $Y^N = e(W, X^N)$; the channel $P_c(z|y)$ produces $Z^N$; the decoder outputs $(\hat{W}, \hat{X}^N) = d(Z^N)$.]

Messages: $\Pr\{W = w\} = 1/M$ for $w \in \{1, 2, \ldots, M\}$.
Source (host): $\Pr\{X^N = x^N\} = \prod_{n=1}^{N} P_s(x_n)$ for $x^N \in \mathcal{X}^N$.
Channel: discrete memoryless $\{\mathcal{Y}, P_c(z|y), \mathcal{Z}\}$.
Error probability: $P_E = \Pr\{\hat{W} \neq W \text{ or } \hat{X}^N \neq X^N\}$.
Rate: $R = \frac{1}{N} \log_2 M$.
Embedding distortion: $D_{xy} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xy}(X_n, e_n(W, X^N))\big]$ for some distortion matrix $\{D_{xy}(x,y), x \in \mathcal{X}, y \in \mathcal{Y}\}$.
Slide 19: Achievable region for robust and reversible embedding

A rate-distortion pair $(\rho, \Delta_{xy})$ is said to be achievable if for all $\epsilon > 0$ there exist, for all large enough $N$, encoders and decoders such that $R \geq \rho - \epsilon$, $D_{xy} \leq \Delta_{xy} + \epsilon$, and $P_E \leq \epsilon$.

RESULT (Willems-Kalker [2003]): The set of achievable rate-distortion pairs is equal to
$$\mathcal{G}_{rre} = \Big\{(\rho, \Delta_{xy}) : 0 \leq \rho \leq I(Y;Z) - H(X),\ \Delta_{xy} \geq \sum_{x,y} P(x,y) D_{xy}(x,y),\ \text{for } P(x,y,z) = P_s(x) P_t(y|x) P_c(z|y)\Big\}. \quad (4)$$
Slide 20: Proof

Achievability: In the Gelfand-Pinsker achievability proof again take the auxiliary random variable $U = [X, Y]$. Then $x^N$ can be reconstructed by the decoder, and since $(x^N, y^N) \in A_\epsilon(X,Y)$ the embedding distortion $D_{xy}$ is OK. For the embedding rate we obtain
$$R = I(U;Z) - I(U;X) = I([X,Y];Z) - I([X,Y];X) = I(Y;Z) - H(X).$$

Converse, rate part:
$\log_2 M = H(W) \leq H(W) - H(W, X^N|\hat{W}, \hat{X}^N) + \text{Fano term}$
$= H(W, X^N) - H(W, X^N|\hat{W}, \hat{X}^N) - H(X^N) + \text{Fano term}$
$\leq H(W, X^N) - H(W, X^N|Z^N, \hat{W}, \hat{X}^N) - H(X^N) + \text{Fano term}$
$= I(W, X^N; Z^N) - H(X^N) + \text{Fano term}$
$= I(Y^N; Z^N) - H(X^N) + \text{Fano term}$
$\leq \sum_{n=1}^{N} \big[I(Y_n; Z_n) - H(X_n)\big] + \text{Fano term}$
$\leq N\big[I(Y;Z) - H(X)\big] + \text{Fano term},$
Slide 21: (continued)

where $X$, $Y$ and $Z$ are random variables with
$$\Pr\{(X,Y,Z) = (x,y,z)\} = \frac{1}{N} \sum_{n=1}^{N} \Pr\{(X_n, Y_n, Z_n) = (x,y,z)\}$$
for $x \in \mathcal{X}$, $y \in \mathcal{Y}$, and $z \in \mathcal{Z}$. Note that $\Pr\{X = x\} = P_s(x)$ for $x \in \mathcal{X}$, and $\Pr\{Z = z | Y = y\} = P_c(z|y)$ for $y \in \mathcal{Y}$ and $z \in \mathcal{Z}$.

Distortion part:
$$D_{xy} = \sum_{x^N, y^N} \Pr\{(X^N, Y^N) = (x^N, y^N)\}\,\frac{1}{N} \sum_{n} D_{xy}(x_n, y_n) = \sum_{x,y} \Pr\{(X,Y) = (x,y)\}\,D_{xy}(x,y).$$

Let $P_E \to 0$, etc.
Slide 22: Example: binary source, Hamming distortion, binary symmetric channel

[Diagram: test channel from $X$ ($\Pr\{X=1\} = p_x$) to $Y$ ($\Pr\{Y=1\} = p_y$) with crossovers $d_1$ and $d_0$, followed by a binary symmetric channel with crossover probability $\alpha$ producing $Z$ ($\Pr\{Z=1\} = p_z$).]

Similar analysis as before.
Slide 23: [Figure: plot of the achievable region $\mathcal{G}_{rre}$; horizontal axis distortion $\Delta_{xy}$, vertical axis rate $\rho$ in bits, for $p_x = \alpha = 0.1$. Minimal distortion 0.218; maximum embedding rate $1 - h(0.1) - h(0.1)$.]
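The maximum rate quoted in the figure caption, $1 - h(0.1) - h(0.1)$, follows from $\rho \leq I(Y;Z) - H(X)$ with the composite made uniform ($p_y = 1/2$), which maximizes $I(Y;Z)$ over a BSC. A quick check (illustrative sketch; `bsc_mi` is a made-up helper name):

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_mi(py, alpha):
    """I(Y;Z) for an input Bernoulli(py) through a BSC(alpha)."""
    pz = py * (1 - alpha) + (1 - py) * alpha
    return h(pz) - h(alpha)

px, alpha = 0.1, 0.1
# I(Y;Z) is largest at p_y = 1/2, where it equals the BSC capacity.
max_rate = bsc_mi(0.5, alpha) - h(px)
print(f"max robust reversible rate = {max_rate:.4f} bits")
```

With $h(0.1) \approx 0.469$, this gives roughly $0.062$ bits per symbol: robustness plus reversibility leaves very little room for the message.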
Slide 24: The zero-rate case: robustification

[Block diagram: host $X^N \sim P_s(x)$ enters the encoder $Y^N = e(X^N)$; the channel $P_c(z|y)$ produces $Z^N$; the decoder outputs $\hat{X}^N = d(Z^N)$.]

Source (host): $\Pr\{X^N = x^N\} = \prod_{n=1}^{N} P_s(x_n)$ for $x^N \in \mathcal{X}^N$.
Channel: discrete memoryless $\{\mathcal{Y}, P_c(z|y), \mathcal{Z}\}$.
Error probability: $P_E = \Pr\{\hat{X}^N \neq X^N\}$.
Robustification distortion: $D_{xy} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xy}(X_n, e_n(X^N))\big]$ for some distortion matrix $\{D_{xy}(x,y), x \in \mathcal{X}, y \in \mathcal{Y}\}$.
Slide 25: Achievable distortions for robustification

A distortion $\Delta_{xy}$ is said to be achievable if for all $\epsilon > 0$ there exist, for all large enough $N$, encoders and decoders such that $D_{xy} \leq \Delta_{xy} + \epsilon$ and $P_E \leq \epsilon$.

RESULT: The set of achievable distortions is equal to
$$\mathcal{G}_{rob} = \Big\{\Delta_{xy} : \Delta_{xy} \geq \sum_{x,y} P(x,y) D_{xy}(x,y),\ \text{for } P(x,y,z) = P_s(x) P_t(y|x) P_c(z|y) \text{ such that } H(X) \leq I(Y;Z)\Big\}. \quad (5)$$

Related to Shannon's separation principle! Robustification is not possible if $H(X) > \max_{P_t(y)} I(Y;Z)$.
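For a Bernoulli($p_x$) host and a BSC($\alpha$) channel, the feasibility condition specializes to $h(p_x) \leq 1 - h(\alpha)$, the BSC capacity. A small sketch (hypothetical helper name `robustifiable`, chosen for illustration):

```python
import math

def h(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def robustifiable(px, alpha):
    """Separation-style feasibility test for a Bernoulli(px) host
    over a BSC(alpha): need H(X) <= capacity 1 - h(alpha)."""
    return h(px) <= 1.0 - h(alpha)

print(robustifiable(0.1, 0.1))  # h(0.1) ~ 0.469 vs capacity ~ 0.531
print(robustifiable(0.3, 0.1))  # h(0.3) ~ 0.881 exceeds the capacity
```

A low-entropy host survives a BSC(0.1) attack, while a host with $h(p_x)$ above the channel capacity cannot be robustified at any distortion.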
Slide 26: V. Partially Reversible Embedding

[Block diagram: host $X^N \sim P_s(x)$ and message $W$ enter the encoder $Y^N = e(W, X^N)$; the decoder outputs $\hat{W} = d(Y^N)$ and a restoration sequence $V^N = f(Y^N)$.]

Messages: $\Pr\{W = w\} = 1/M$ for $w \in \{1, 2, \ldots, M\}$.
Source (host): $\Pr\{X^N = x^N\} = \prod_{n=1}^{N} P_s(x_n)$ for $x^N \in \mathcal{X}^N$.
Error probability: $P_E = \Pr\{\hat{W} \neq W\}$.
Rate: $R = \frac{1}{N} \log_2 M$.
Embedding distortion: $D_{xy} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xy}(X_n, e_n(W, X^N))\big]$ for some distortion matrix $\{D_{xy}(x,y), x \in \mathcal{X}, y \in \mathcal{Y}\}$.
Restoration distortion: $D_{xv} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xv}(X_n, f_n(Y^N))\big]$ for a distortion matrix $\{D_{xv}(x,v), x \in \mathcal{X}, v \in \mathcal{V}\}$.
Slide 27: Achievable region for partially reversible embedding

A rate-distortion triple $(\rho, \Delta_{xy}, \Delta_{xv})$ is said to be achievable if for all $\epsilon > 0$ there exist, for all large enough $N$, encoders and decoders such that $R \geq \rho - \epsilon$, $D_{xy} \leq \Delta_{xy} + \epsilon$, $D_{xv} \leq \Delta_{xv} + \epsilon$, and $P_E \leq \epsilon$.

RESULT (Willems-Kalker [2002]): The set of achievable rate-distortion triples is given by
$$\mathcal{G}_{pre} = \Big\{(\rho, \Delta_{xy}, \Delta_{xv}) : 0 \leq \rho \leq H(Y) - I(X;Y,V),\ \Delta_{xy} \geq \sum_{x,y,v} P(x,y,v) D_{xy}(x,y),\ \Delta_{xv} \geq \sum_{x,y,v} P(x,y,v) D_{xv}(x,v),\ \text{for } P(x,y,v) = P_s(x) P_t(y,v|x)\Big\}. \quad (6)$$
Slide 28: Proof

Achievability: In the Gelfand-Pinsker achievability proof, note again that $Z = Y$ (noiseless channel) and take the auxiliary random variable $U = [Y, V]$. Then $v^N$ can be reconstructed by the decoder, and since $(x^N, y^N, v^N) \in A_\epsilon(X,Y,V)$ both $D_{xy}$ and $D_{xv}$ are OK. For the embedding rate we obtain
$$R = I(U;Z) - I(U;X) = I([Y,V];Y) - I([Y,V];X) = H(Y) - I(X;Y,V).$$

Converse, rate part:
$\log_2 M = H(W) = H(Y^N, V^N, W) - H(Y^N, V^N | W)$
$\leq H(Y^N) + H(W|\hat{W}) - I(Y^N, V^N; X^N | W)$
$\leq H(Y^N) - H(X^N|W) + H(X^N|W, Y^N, V^N) + \text{Fano term}$
$\leq H(Y^N) - H(X^N) + H(X^N|Y^N, V^N) + \text{Fano term}$
$\leq \sum_{n=1}^{N} \big[H(Y_n) - H(X_n) + H(X_n|Y_n, V_n)\big] + \text{Fano term}$
$\leq N\big[H(Y) - H(X) + H(X|Y,V)\big] + \text{Fano term} = N\big[H(Y) - I(X;Y,V)\big] + \text{Fano term},$
Slide 29: (continued)

where $X$, $Y$ and $V$ are random variables with
$$\Pr\{(X,Y,V) = (x,y,v)\} = \frac{1}{N} \sum_{n=1}^{N} \Pr\{(X_n, Y_n, V_n) = (x,y,v)\}$$
for $x \in \mathcal{X}$, $y \in \mathcal{Y}$, and $v \in \mathcal{V}$. Note that $\Pr\{X = x\} = P_s(x)$ for $x \in \mathcal{X}$.

Distortion parts:
$$D_{xy} = \sum_{x^N, y^N} \Pr\{(X^N, Y^N) = (x^N, y^N)\}\,\frac{1}{N} \sum_{n} D_{xy}(x_n, y_n) = \sum_{x,y} \Pr\{(X,Y) = (x,y)\}\,D_{xy}(x,y),$$
$$D_{xv} = \sum_{x^N, v^N} \Pr\{(X^N, V^N) = (x^N, v^N)\}\,\frac{1}{N} \sum_{n} D_{xv}(x_n, v_n) = \sum_{x,v} \Pr\{(X,V) = (x,v)\}\,D_{xv}(x,v).$$

Let $P_E \to 0$, etc.
Slide 30: The other perspective again

Consider a blocked system with blocks of length $N$. In block $k$ a message can be embedded (noise-free) at rate $H(Y|X)$ with the corresponding distortion. Then in block $k+1$, data is embedded that specifies a restoration sequence $v^N(k)$ given $y^N(k)$. This requires $N I(X;V|Y)$ bits. Therefore the remaining embedding rate is
$$R = H(Y|X) - I(X;V|Y) = H(Y|X) - H(X|Y) + H(X|Y,V) = H(Y) - H(X) + H(X|Y,V) = H(Y) - I(X;Y,V).$$
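The chain of equalities above holds for any joint distribution $P(x,y,v)$. The sketch below (not from the talk) verifies $H(Y|X) - I(X;V|Y) = H(Y) - I(X;Y,V)$ on a randomly generated binary joint; all names and the joint itself are illustrative.

```python
import math
import random
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def H(joint, idx):
    """Entropy of the marginal over coordinates idx of {tuple: prob}."""
    m = defaultdict(float)
    for key, p in joint.items():
        m[tuple(key[i] for i in idx)] += p
    return entropy(m.values())

# A random (hypothetical) joint P(x, y, v) on binary alphabets.
random.seed(1)
keys = [(x, y, v) for x in (0, 1) for y in (0, 1) for v in (0, 1)]
w = [random.random() for _ in keys]
total = sum(w)
joint = {k: wi / total for k, wi in zip(keys, w)}

Hx, Hy = H(joint, (0,)), H(joint, (1,))
Hxy, Hyv, Hxyv = H(joint, (0, 1)), H(joint, (1, 2)), H(joint, (0, 1, 2))

lhs = (Hxy - Hx) - ((Hxy - Hy) - (Hxyv - Hyv))  # H(Y|X) - I(X;V|Y)
rhs = Hy - (Hx + Hyv - Hxyv)                    # H(Y) - I(X;Y,V)
print(lhs, rhs)
```

The two sides agree to machine precision, which is expected since the identity is pure entropy algebra and needs no assumptions on the joint.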
Slide 31: The zero-rate case: self-embedding

[Block diagram: host $X^N \sim P_s(x)$ enters the encoder $Y^N = e(X^N)$; the restoration map outputs $V^N = f(Y^N)$.]

Source (host): $\Pr\{X^N = x^N\} = \prod_{n=1}^{N} P_s(x_n)$ for $x^N \in \mathcal{X}^N$.
Embedding distortion: $D_{xy} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xy}(X_n, e_n(X^N))\big]$ for some distortion matrix $\{D_{xy}(x,y), x \in \mathcal{X}, y \in \mathcal{Y}\}$.
Restoration distortion: $D_{xv} = E\big[\frac{1}{N} \sum_{n=1}^{N} D_{xv}(X_n, f_n(Y^N))\big]$ for a distortion matrix $\{D_{xv}(x,v), x \in \mathcal{X}, v \in \mathcal{V}\}$.
Slide 32: Achievable distortions for self-embedding

A distortion pair $(\Delta_{xy}, \Delta_{xv})$ is said to be achievable if for all $\epsilon > 0$ there exist, for all large enough $N$, encoders and decoders such that $D_{xy} \leq \Delta_{xy} + \epsilon$ and $D_{xv} \leq \Delta_{xv} + \epsilon$.

RESULT (Willems-Kalker [2002]): The set of achievable distortion pairs is equal to
$$\mathcal{G}_{se} = \Big\{(\Delta_{xy}, \Delta_{xv}) : \Delta_{xy} \geq \sum_{x,y,v} P(x,y,v) D_{xy}(x,y),\ \Delta_{xv} \geq \sum_{x,y,v} P(x,y,v) D_{xv}(x,v),\ \text{for } P(x,y,v) = P_s(x) P_t(y,v|x) \text{ such that } H(Y) \geq I(X;Y,V)\Big\}. \quad (7)$$

Self-embedding is putting a vector quantizer into a scalar quantizer, or making an abstract index to a restoration vector $v^N$ meaningful.
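The feasibility condition in (7) can be tested mechanically for a candidate test channel. A minimal sketch (hypothetical helper name `self_embedding_feasible`; the condition is read as $H(Y) \geq I(X;Y,V)$): a channel whose restoration $V$ simply copies $Y$ is always feasible, while one that asks $V$ to carry $H(X)$ bits through a constant $Y$ is not.

```python
import math
from collections import defaultdict

def entropy(probs):
    """Shannon entropy in bits of an iterable of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def H(joint, idx):
    """Entropy of the marginal over coordinates idx of {tuple: prob}."""
    m = defaultdict(float)
    for key, p in joint.items():
        m[tuple(key[i] for i in idx)] += p
    return entropy(m.values())

def self_embedding_feasible(joint):
    """Condition (7): H(Y) >= I(X; Y, V) for joint {(x, y, v): prob}."""
    i_x_yv = H(joint, (0,)) + H(joint, (1, 2)) - H(joint, (0, 1, 2))
    return H(joint, (1,)) >= i_x_yv - 1e-12

# V = Y (restoration equals the composite): I(X;Y,V) = I(X;Y) <= H(Y).
ok = {(x, y, y): 0.25 for x in (0, 1) for y in (0, 1)}
# Y constant but V = X: restoration needs H(X) bits, none are available.
bad = {(x, 0, x): 0.5 for x in (0, 1)}
print(self_embedding_feasible(ok), self_embedding_feasible(bad))
```

The first joint is feasible and the second is not, mirroring the intuition that the composite $Y$ must have room to carry the restoration index.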
Slide 33: VI. Remarks
1. Our results are related to results of Sutivong, Cover, et al., though with slightly different setups (embedding distortion).
2. We cannot yet handle the partially reversible AND robust case. An achievable region would be similar to the Sutivong, Cover, Chiang, Kim [2002] region; no converse.
3. Coding techniques for the reversible case have been studied (with Deran Maas [2002]).
4. Open problems: (A) Arimoto-Blahut methods to compute the rate-distortion functions; (B) coding techniques, especially for the zero-rate cases.
More informationA Theory of Network Equivalence Part I: Point-to-Point Channels
972 IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 57, NO. 2, FEBRUARY 2011 A Theory of Network Equivalence Part I: Point-to-Point Channels Ralf Koetter, Fellow, IEEE, Michelle Effros, Fellow, IEEE, and
More informationCONNECTIVITY CHECK IN 3-CONNECTED PLANAR GRAPHS WITH OBSTACLES
CONNECTIVITY CHECK IN 3-CONNECTED PLANAR GRAPHS WITH OBSTACLES M. M. KANTÉ 1 B. COURCELLE 1 C. GAVOILLE 1 A. TWIGG 2 1 Université Bordeaux 1, LaBRI, CNRS. 2 Computer Laboratory, Cambridge University. Topological
More informationIBM Research Report. Inter Mode Selection for H.264/AVC Using Time-Efficient Learning-Theoretic Algorithms
RC24748 (W0902-063) February 12, 2009 Electrical Engineering IBM Research Report Inter Mode Selection for H.264/AVC Using Time-Efficient Learning-Theoretic Algorithms Yuri Vatis Institut für Informationsverarbeitung
More informationA Image Comparative Study using DCT, Fast Fourier, Wavelet Transforms and Huffman Algorithm
International Journal of Engineering Research and General Science Volume 3, Issue 4, July-August, 15 ISSN 91-2730 A Image Comparative Study using DCT, Fast Fourier, Wavelet Transforms and Huffman Algorithm
More informationApplying the Information Bottleneck Principle to Unsupervised Clustering of Discrete and Continuous Image Representations
Applying the Information Bottleneck Principle to Unsupervised Clustering of Discrete and Continuous Image Representations Shiri Gordon Faculty of Engineering Tel-Aviv University, Israel gordonha@post.tau.ac.il
More informationDigital Communication Prof. Bikash Kumar Dey Department of Electrical Engineering Indian Institute of Technology, Bombay
Digital Communication Prof. Bikash Kumar Dey Department of Electrical Engineering Indian Institute of Technology, Bombay Lecture - 29 Source Coding (Part-4) We have already had 3 classes on source coding
More informationFundamentals of Multimedia. Lecture 5 Lossless Data Compression Variable Length Coding
Fundamentals of Multimedia Lecture 5 Lossless Data Compression Variable Length Coding Mahmoud El-Gayyar elgayyar@ci.suez.edu.eg Mahmoud El-Gayyar / Fundamentals of Multimedia 1 Data Compression Compression
More informationECE521: Week 11, Lecture March 2017: HMM learning/inference. With thanks to Russ Salakhutdinov
ECE521: Week 11, Lecture 20 27 March 2017: HMM learning/inference With thanks to Russ Salakhutdinov Examples of other perspectives Murphy 17.4 End of Russell & Norvig 15.2 (Artificial Intelligence: A Modern
More informationCSE200: Computability and complexity Interactive proofs
CSE200: Computability and complexity Interactive proofs Shachar Lovett January 29, 2018 1 What are interactive proofs Think of a prover trying to convince a verifer that a statement is correct. For example,
More informationDiscrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity
Discrete Optimization 2010 Lecture 5 Min-Cost Flows & Total Unimodularity Marc Uetz University of Twente m.uetz@utwente.nl Lecture 5: sheet 1 / 26 Marc Uetz Discrete Optimization Outline 1 Min-Cost Flows
More informationInternational Journal of Innovative Research in Computer and Communication Engineering
Data Hiding Using Difference Expension Method YASMEEN M.G,RAJALAKSHMI G Final year M.E, Arunai Engineering College, Thiruvannamalai, Tamilnadu, India Assistant Professor, Arunai Engineering College,Thiruvannamalai,
More informationMedical Image Segmentation Based on Mutual Information Maximization
Medical Image Segmentation Based on Mutual Information Maximization J.Rigau, M.Feixas, M.Sbert, A.Bardera, and I.Boada Institut d Informatica i Aplicacions, Universitat de Girona, Spain {jaume.rigau,miquel.feixas,mateu.sbert,anton.bardera,imma.boada}@udg.es
More informationVectors and the Geometry of Space
Vectors and the Geometry of Space In Figure 11.43, consider the line L through the point P(x 1, y 1, z 1 ) and parallel to the vector. The vector v is a direction vector for the line L, and a, b, and c
More informationModule 9 AUDIO CODING. Version 2 ECE IIT, Kharagpur
Module 9 AUDIO CODING Lesson 29 Transform and Filter banks Instructional Objectives At the end of this lesson, the students should be able to: 1. Define the three layers of MPEG-1 audio coding. 2. Define
More information4 Factor graphs and Comparing Graphical Model Types
Massachusetts Institute of Technology Department of Electrical Engineering and Computer Science 6.438 Algorithms for Inference Fall 2014 4 Factor graphs and Comparing Graphical Model Types We now introduce
More informationPlenty of Whoopee Strand New Years
N B V Y-NN \ - 4! / N N ) B > 3 9 N - N 95 q N N B zz 3 - z N Y B - 933 55 2 -- - -» 25-5 & V X X X X N Y B 5 932 / q - 8 4 6 B N 3 BN NY N ; -! 2-- - - 2 B z ; - - B VN N 4) - - - B N N N V 4- - 8 N-
More informationPoint-Set Topology 1. TOPOLOGICAL SPACES AND CONTINUOUS FUNCTIONS
Point-Set Topology 1. TOPOLOGICAL SPACES AND CONTINUOUS FUNCTIONS Definition 1.1. Let X be a set and T a subset of the power set P(X) of X. Then T is a topology on X if and only if all of the following
More informationCryptanalyzing the Polynomial Reconstruction based Public-Key System under Optimal Parameter Choice
Cryptanalyzing the Polynomial Reconstruction based Public-Key System under Optimal Parameter Choice Aggelos Kiayias - Moti Yung U. of Connecticut - Columbia U. (Public-Key) Cryptography intractability
More informationNetwork Image Coding for Multicast
Network Image Coding for Multicast David Varodayan, David Chen and Bernd Girod Information Systems Laboratory, Stanford University Stanford, California, USA {varodayan, dmchen, bgirod}@stanford.edu Abstract
More informationA tight bound on approximating arbitrary metrics by tree metric Reference :[FRT2004]
A tight bound on approximating arbitrary metrics by tree metric Reference :[FRT2004] Jittat Fakcharoenphol, Satish Rao, Kunal Talwar Journal of Computer & System Sciences, 69 (2004), 485-497 1 A simple
More informationCSC 373: Algorithm Design and Analysis Lecture 4
CSC 373: Algorithm Design and Analysis Lecture 4 Allan Borodin January 14, 2013 1 / 16 Lecture 4: Outline (for this lecture and next lecture) Some concluding comments on optimality of EST Greedy Interval
More informationBilevel Sparse Coding
Adobe Research 345 Park Ave, San Jose, CA Mar 15, 2013 Outline 1 2 The learning model The learning algorithm 3 4 Sparse Modeling Many types of sensory data, e.g., images and audio, are in high-dimensional
More informationCache-Aided Coded Multicast for Correlated Sources
Cache-Aided Coded Multicast for Correlated Sources P. Hassanzadeh A. Tulino J. Llorca E. Erkip arxiv:1609.05831v1 [cs.it] 19 Sep 2016 Abstract The combination of edge caching and coded multicasting is
More information1. Techniques for ill-posed least squares problems. We write throughout this lecture = 2. Consider the following least squares problems:
ILL-POSED LEAST SQUARES PROBLEMS. Techniques for ill-posed least squares problems. We write throughout this lecture = 2. Consider the following least squares problems: where A = min b Ax, min b A ε x,
More informationA graph is finite if its vertex set and edge set are finite. We call a graph with just one vertex trivial and all other graphs nontrivial.
2301-670 Graph theory 1.1 What is a graph? 1 st semester 2550 1 1.1. What is a graph? 1.1.2. Definition. A graph G is a triple (V(G), E(G), ψ G ) consisting of V(G) of vertices, a set E(G), disjoint from
More informationLecture : Topological Space
Example of Lecture : Dr. Department of Mathematics Lovely Professional University Punjab, India October 18, 2014 Outline Example of 1 2 3 Example of 4 5 6 Example of I Topological spaces and continuous
More informationReview Initial Value Problems Euler s Method Summary
THE EULER METHOD P.V. Johnson School of Mathematics Semester 1 2008 OUTLINE 1 REVIEW 2 INITIAL VALUE PROBLEMS The Problem Posing a Problem 3 EULER S METHOD Method Errors 4 SUMMARY OUTLINE 1 REVIEW 2 INITIAL
More informationData Structures. Dynamic Sets
Data Structures Binary Search Tree Dynamic Sets Elements have a key and satellite data Dynamic sets support queries such as: Search(S, k) Minimum(S) Maximum(S) Successor(S, x) Predecessor(S, x) Insert(S,
More informationThe AWGN Red Alert Problem
The AWGN Red Alert Problem Bobak Nazer, Yanina Shkel, and Stark C. Draper ECE Department Boston University ECE Department University of Wisconsin, Madison ITA 2011 February 10, 2011 Nazer, Shkel, Draper
More informationRobust Lossless Data Hiding. Outline
Robust Lossless Data Hiding Yun Q. Shi, Zhicheng Ni, Nirwan Ansari Electrical and Computer Engineering New Jersey Institute of Technology October 2010 1 Outline What is lossless data hiding Existing robust
More information