368 results
Search Results
2. On Polar Coding for Side Information Channels.
- Author
- Beilin, Barak and Burshtein, David
- Subjects
- CHANNEL coding, SOURCE code, BINARY codes
- Abstract
We propose a successive cancellation list (SCL) encoding and decoding scheme for the Gelfand-Pinsker (GP) problem based on the known nested polar coding scheme. It applies SCL encoding for the source coding part, and SCL decoding with a properly defined CRC for the channel coding part. The scheme shows improved performance compared to the existing method. A known issue with nested polar codes for binary dirty paper is the existence of frozen channel code bits that are not frozen in the source code. These bits need to be retransmitted in a second phase of the scheme, thus reducing the rate and increasing the required blocklength. We provide an improved bound on the size of this set, and on its scaling with respect to the blocklength, when the Bhattacharyya parameter of the test channel used for source coding is sufficiently large, or the Bhattacharyya parameter of the channel seen at the decoder is sufficiently small. The result is formulated for an arbitrary binary-input memoryless GP problem, since unlike the previous results, it does not require degradedness of the two channels mentioned above. Finally, we present simulation results for binary dirty paper and noisy write once memory codes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
3. Minimum Energy per Bit in Broadcast and Interference Channels With Correlated Information.
- Author
- Host-Madsen, Anders
- Subjects
- BROADCASTING industry, PAPER arts, METHODOLOGY, STATISTICAL correlation, INFORMATION theory
- Abstract
This paper develops a methodology for finding the minimum energy per bit in networks with correlated information, without the need to find bounds on capacity. This is used to derive the exact minimum energy per bit for some broadcast and interference channels with common and correlated messages, and bounds on the minimum energy per bit for some other channels. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
4. On the Multiple-Access Channel With Common Rate-Limited Feedback.
- Author
- Shaviv, Dor and Steinberg, Yossef
- Subjects
- ENCODING, PAPER arts, MATHEMATICS, MARKOV processes, GAUSSIAN channels
- Abstract
This paper studies the multiple-access channel (MAC) with rate-limited feedback. The channel output is encoded into one stream of bits, which is provided causally to the two users at the channel input. An achievable rate region for this setup is derived, based on superposition of information, block Markov coding, and coding with various degrees of side information for the feedback link. The suggested region coincides with the Cover–Leung inner bound for large feedback rates. The result is then extended for cases where there is only a feedback link to one of the transmitters, and for a more general case where there are two separate feedback links to both transmitters. We compute achievable regions for the Gaussian MAC and for the binary erasure MAC. The Gaussian region is computed for the case of common rate-limited feedback, whereas the region for the binary erasure MAC is computed for one-sided feedback. It is known that for the latter, the Cover–Leung region is tight, and we obtain results that coincide with the feedback capacity region for high feedback rates. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
5. A Unified Framework for One-Shot Achievability via the Poisson Matching Lemma.
- Author
- Li, Cheuk Ting and Anantharam, Venkat
- Subjects
- BROADCAST channels, SOURCE code, LOSSY data compression, CHANNEL coding, INFORMATION theory, INFORMATION networks, PAPER arts
- Abstract
We introduce a fundamental lemma called the Poisson matching lemma, and apply it to prove one-shot achievability results for various settings, namely channels with state information at the encoder, lossy source coding with side information at the decoder, joint source-channel coding, broadcast channels, distributed lossy source coding, multiple access channels and channel resolvability. Our one-shot bounds improve upon the best known one-shot bounds in most of the aforementioned settings (except multiple access channels and channel resolvability, where we recover bounds comparable to the best known bounds), with shorter proofs in some settings even when compared to the conventional asymptotic approach using typicality. The Poisson matching lemma replaces both the packing and covering lemmas, greatly simplifying the error analysis. This paper extends the work of Li and El Gamal on Poisson functional representation, which mainly considered variable-length source coding settings, whereas this paper studies fixed-length settings, and is not limited to source coding, showing that the Poisson functional representation is a viable alternative to typicality for most problems in network information theory. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. Two Applications of Coset Cardinality Spectrum of Distributed Arithmetic Coding.
- Subjects
- BINARY sequences, INFINITY (Mathematics)
- Abstract
Distributed Arithmetic Coding (DAC) is a practical realization of Slepian-Wolf coding that partitions the source space into cosets. The Coset Cardinality Spectrum (CCS) is an important property of DAC that was defined in our previous work. In this paper, we give two applications of CCS. First, we find that the DAC bitstream is not compact. The rate loss of the DAC bitstream is caused by two factors: unequal coset partitioning and bit indivisibility. It is proved that as the code length goes to infinity, the expected value of the bit-indivisibility rate loss tends to 0.5 for any irrational Rate Change Step (RCS), where the RCS refers to the rate change when one source bit is flipped. With the help of CCS, the bit-indivisibility rate loss can be compensated to some extent. In particular, for any irrational RCS, as the code length goes to infinity, the expected value of the remaining bit-indivisibility rate loss after compensation tends to about 0.47. The second application of CCS is DAC decoder design. We derive the formula of the path metric and find that the intuitive path-metric formula in the original paper on DAC is not correct. A backward-replacing algorithm is proposed to make full use of memory. Experimental results confirm the correctness of the theoretical analyses. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
7. Asymptotic Divergences and Strong Dichotomy.
- Author
- Huang, Xiang, Lutz, Jack H., Mayordomo, Elvira, and Stull, Donald M.
- Subjects
- KOLMOGOROV complexity, DIVERGENCE theorem, SUFFIXES & prefixes (Grammar), GAMBLERS, RANDOM variables, PROBABILITY measures
- Abstract
The Schnorr-Stimm dichotomy theorem (Schnorr and Stimm, 1972) concerns finite-state gamblers that bet on infinite sequences of symbols taken from a finite alphabet $\Sigma $. The theorem asserts that, for any such sequence $S$ , the following two things are true. (1) If $S$ is not normal in the sense of Borel (meaning that every two strings of equal length appear with equal asymptotic frequency in $S$), then there is a finite-state gambler that wins money at an infinitely-often exponential rate betting on $S$. (2) If $S$ is normal, then any finite-state gambler loses money at an exponential rate betting on $S$. In this paper we use the Kullback-Leibler divergence to formulate the lower asymptotic divergence ${\mathrm {div}}(S||\alpha)$ of a probability measure $\alpha $ on $\Sigma $ from a sequence $S$ over $\Sigma $ and the upper asymptotic divergence ${\mathrm {Div}}(S||\alpha)$ of $\alpha $ from $S$ in such a way that a sequence $S$ is $\alpha $ -normal (meaning that every string $w$ has asymptotic frequency $\alpha (w)$ in $S$) if and only if ${\mathrm {Div}}(S||\alpha)=0$. We also use the Kullback-Leibler divergence to quantify the total risk ${\mathrm {Risk}}_{G}(w)$ that a finite-state gambler $G$ takes when betting along a prefix $w$ of $S$. Our main theorem is a strong dichotomy theorem that uses the above notions to quantify the exponential rates of winning and losing on the two sides of the Schnorr-Stimm dichotomy theorem (with the latter routinely extended from normality to $\alpha $ -normality). Modulo asymptotic caveats in the paper, our strong dichotomy theorem says that the following two things hold for prefixes $w$ of $S$. (1$'$) The infinitely-often exponential rate of winning in (1) is $2^{{\mathrm {Div}}(S||\alpha)|w|}$. (2$'$) The exponential rate of loss in (2) is $2^{- {\mathrm {Risk}}_{G}(w)}$. We also use (1$'$) to show that $1- {\mathrm {Div}}(S||\alpha)/c$ , where $c= \log (1/ \min _{a\in \Sigma }\alpha (a))$ , is an upper bound on the finite-state $\alpha $ -dimension of $S$ and prove the dual fact that $1- {\mathrm {div}}(S||\alpha)/c$ is an upper bound on the finite-state strong $\alpha $ -dimension of $S$. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
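The divergence notions in the abstract above can be illustrated numerically. The following minimal Python sketch (not from the paper; the function name and example sequences are illustrative) computes the KL divergence of the empirical single-symbol frequencies of a finite prefix from a target measure alpha. This is only a first-order proxy for Div(S||alpha), which is defined over strings of every length, but it already separates a normal prefix from a biased one:

```python
from collections import Counter
from math import log2

def empirical_kl(prefix, alpha):
    """KL divergence (in bits) of the empirical symbol frequencies of a
    finite prefix from the probability measure alpha."""
    n = len(prefix)
    counts = Counter(prefix)
    return sum((c / n) * log2((c / n) / alpha[s]) for s, c in counts.items())

alpha = {"0": 0.5, "1": 0.5}            # uniform measure on a binary alphabet
balanced = "01" * 5000                  # empirical frequencies match alpha
skewed = "0" * 7500 + "1" * 2500        # empirical frequencies (0.75, 0.25)

print(empirical_kl(balanced, alpha))    # 0.0, consistent with alpha-normality
print(empirical_kl(skewed, alpha))      # positive: the prefix diverges from alpha
```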
8. Robust Scatter Matrix Estimation for High Dimensional Distributions With Heavy Tail.
- Author
- Lu, Junwei, Han, Fang, and Liu, Han
- Subjects
- S-matrix theory, GOODNESS-of-fit tests
- Abstract
This paper studies large scatter matrix estimation for heavy-tailed distributions. The contributions of this paper are twofold. First, we propose and advocate the use of a new distribution family, the pair-elliptical, for modeling high dimensional data. The pair-elliptical is more flexible than the elliptical and makes goodness of fit easier to check. Second, building on the pair-elliptical family, we advocate using quantile-based statistics for estimating the scatter matrix. For this, we provide a family of quantile-based statistics. They outperform the existing ones by better balancing efficiency and robustness. In particular, we show that the proposed estimators have comparable performance to the moment-based counterparts under the Gaussian assumption. The method is also tuning-free compared to Catoni’s M-estimator for covariance matrix estimation. We further apply the method to conduct a variety of statistical methods. The corresponding theoretical properties as well as numerical performances are provided. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
9. Unifying the Brascamp-Lieb Inequality and the Entropy Power Inequality.
- Author
- Anantharam, Venkat, Jog, Varun, and Nair, Chandra
- Subjects
- ENTROPY, DIFFERENTIAL entropy, DIFFERENTIAL inequalities, LINEAR matrix inequalities, TOPOLOGICAL entropy
- Abstract
The entropy power inequality (EPI) and the Brascamp-Lieb inequality (BLI) are fundamental inequalities concerning the differential entropies of linear transformations of random vectors. The EPI provides lower bounds for the differential entropy of linear transformations of random vectors with independent components. The BLI, on the other hand, provides upper bounds on the differential entropy of a random vector in terms of the differential entropies of some of its linear transformations. In this paper, we define a family of entropy functionals, which we show are subadditive. We then establish that Gaussians are extremal for these functionals by adapting a proof technique from Geng and Nair (2014). As a consequence, we obtain a new entropy inequality that generalizes both the BLI and EPI. By considering a variety of independence relations among the components of the random vectors appearing in these functionals, we also obtain families of inequalities that lie between the EPI and the BLI. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
10. The Eigenvectors of Single-Spiked Complex Wishart Matrices: Finite and Asymptotic Analyses.
- Author
- Dharmawansa, Prathapasinghe, Dissanayake, Pasan, and Chen, Yang
- Subjects
- WISHART matrices, COMPLEX matrices, PROBABILITY density function, RANDOM variables, EIGENVALUES
- Abstract
Let $\mathrm {W}\in \mathbb {C}^{n\times n}$ be a single-spiked Wishart matrix in the class $\mathrm {W}\sim \mathcal {CW}_{n}(m,\mathrm {I}_{n}+ \theta \mathrm {v}\mathrm {v}^{\dagger}) $ with $m\geq n$ , where ${\mathrm {I}}_{n}$ is the $n\times n$ identity matrix, $\mathrm {v}\in \mathbb {C}^{n\times 1}$ is an arbitrary vector with unit Euclidean norm, $\theta \geq 0$ is a non-random parameter, and $(\cdot)^{\dagger} $ represents the conjugate-transpose operator. Let $\mathrm {u}_{1}$ and ${\mathrm {u}}_{n}$ denote the eigenvectors corresponding to the smallest and the largest eigenvalues of $\mathrm {W}$ , respectively. This paper investigates the probability density function (p.d.f.) of the random quantity $Z_{\ell }^{(n)}=\left |{\mathrm {v}^{\dagger} \mathrm {u}_\ell }\right |^{2}\in (0,1)$ for $\ell =1,n$. In particular, we derive a finite dimensional closed-form p.d.f. for $Z_{1}^{(n)}$ which is amenable to asymptotic analysis as $m,n$ diverge with $m-n$ fixed. It turns out that, in this asymptotic regime, the scaled random variable $nZ_{1}^{(n)}$ converges in distribution to $\chi ^{2}_{2}/2(1+\theta)$ , where $\chi _{2}^{2}$ denotes a chi-squared random variable with two degrees of freedom. This reveals that $\mathrm {u}_{1}$ can be used to infer information about the spike. On the other hand, the finite dimensional p.d.f. of $Z_{n}^{(n)}$ is expressed as a double integral in which the integrand contains a determinant of a square matrix of dimension $(n-2)$. Although a simple solution to this double integral seems intractable, for the special configurations of $n=2,3$ , and 4, we obtain closed-form expressions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
11. On the Information-Theoretic Security of Combinatorial All-or-Nothing Transforms.
- Author
- Gu, Yujie, Akao, Sonata, Esfahani, Navid Nasr, Miao, Ying, and Sakurai, Kouichi
- Subjects
- INFORMATION-theoretic security, DISTRIBUTION (Probability theory), INFORMATION technology security, RANDOM variables
- Abstract
All-or-nothing transforms (AONTs) were proposed by Rivest as a message preprocessing technique for encrypting data to protect against brute-force attacks, and have numerous applications in cryptography and information security. Later the unconditionally secure AONTs and their combinatorial characterization were introduced by Stinson. Informally, a combinatorial AONT is an array satisfying certain unbiasedness requirements, and its security properties in general depend on the prior probability distribution on the input $s$ -tuples. Recently, it was shown by Esfahani and Stinson that a combinatorial AONT has perfect security provided that all the input $s$ -tuples are equiprobable, and has weak security provided that all the input $s$ -tuples occur with non-zero probability. This paper aims to explore the gap between perfect security and weak security for combinatorial $(t,s,v)$ -AONTs. Concretely, we consider the typical scenario in which all $s$ inputs take values independently (but not necessarily identically) and quantify the amount of information $H(\mathcal {X}|\mathcal {Y})$ about any $t$ inputs $\mathcal {X}$ that is not revealed by any $s-t$ outputs $\mathcal {Y}$. In particular, we establish general lower and upper bounds on $H(\mathcal {X}|\mathcal {Y})$ for combinatorial AONTs using information-theoretic techniques, and also show that the derived bounds can be attained in certain cases. Furthermore, the discussions are extended to the security properties of combinatorial asymmetric AONTs. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
12. Distribution of the Minimum Distance of Random Linear Codes.
- Author
- Hao, Jing, Huang, Han, Livshyts, Galyna V., and Tikhomirov, Konstantin
- Subjects
- LINEAR codes, HAMMING distance, CUMULATIVE distribution function, DISTRIBUTION (Probability theory), RANDOM variables
- Abstract
Let $q\geq 2$ be a prime power. In this paper, we study the distribution of the minimum distance (in the Hamming metric) of a random linear code of dimension $k$ in $\mathbb {F}_{q}^{n}$. We provide quantitative estimates showing that the distribution function of the minimum distance is close (superpolynomially in $n$) to the cumulative distribution function of the minimum of $(q^{k}-1)/(q-1)$ independent binomial random variables with parameters $\frac {1}{q}$ and $n$. The latter, in turn, converges to a Gumbel distribution at integer points when $\frac {k}{n}$ converges to a fixed number in (0, 1). Our result confirms in a strong sense that, apart from the identification of the weights of proportional codewords, the probabilistic dependencies introduced by the linear structure of the random code produce a negligible effect on the minimum code weight. As a corollary of the main result, we obtain an improvement of the Gilbert–Varshamov bound for $2< q< 49$. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
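The comparison law in the abstract above can be checked empirically at small blocklengths. The sketch below (illustrative, not from the paper; the parameters, seed, and helper names are assumptions) takes $q=2$ and compares the average minimum distance of random linear codes with the average minimum of $2^{k}-1$ independent Binomial$(n, 1/2)$ Hamming weights:

```python
import random

def min_distance(rows, n):
    """Minimum Hamming weight over the nonzero codewords spanned by
    `rows`, a k x n generator matrix over F_2 with rows packed as ints."""
    k = len(rows)
    best = n
    for m in range(1, 1 << k):          # every nonzero message vector
        cw = 0
        for i in range(k):
            if (m >> i) & 1:
                cw ^= rows[i]           # linear combination over F_2
        best = min(best, bin(cw).count("1"))
    return best

random.seed(0)
n, k, trials = 24, 6, 200

# Empirical minimum distance of random linear codes (q = 2) ...
code_mins = [min_distance([random.getrandbits(n) for _ in range(k)], n)
             for _ in range(trials)]

# ... versus the minimum of (q^k - 1)/(q - 1) = 2^k - 1 independent
# Binomial(n, 1/2) weights, the approximating law from the result above.
iid_mins = [min(bin(random.getrandbits(n)).count("1")
                for _ in range(2 ** k - 1))
            for _ in range(trials)]

print(sum(code_mins) / trials, sum(iid_mins) / trials)  # similar averages
```

At these tiny parameters the match is only approximate; the theorem concerns the regime where $n$ grows with $k/n$ fixed.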
13. Quadratic Privacy-Signaling Games and the MMSE Information Bottleneck Problem for Gaussian Sources.
- Author
- Kazikli, Ertan, Gezici, Sinan, and Yuksel, Serdar
- Subjects
- NASH equilibrium, MINI-Mental State Examination, GAMES, PERSUASION (Psychology), PRIVACY
- Abstract
We investigate a privacy-signaling game problem in which a sender with privacy concerns observes a pair of correlated random vectors which are modeled as jointly Gaussian. The sender aims to hide one of these random vectors and convey the other one whereas the objective of the receiver is to accurately estimate both of the random vectors. We analyze these conflicting objectives in a game theoretic framework with quadratic costs where depending on the commitment conditions (of the sender), we consider Nash or Stackelberg (Bayesian persuasion) equilibria. We show that a payoff dominant Nash equilibrium among all admissible policies is attained by a set of explicitly characterized linear policies. We also show that a payoff dominant Nash equilibrium coincides with a Stackelberg equilibrium. We formulate the information bottleneck problem within our Stackelberg framework under the mean squared error distortion criterion where the information bottleneck setup has a further restriction that only one of the random variables is observed at the sender. We show that this MMSE Gaussian Information Bottleneck Problem admits a linear solution which is explicitly characterized in the paper. We provide explicit conditions on when the optimal solutions, or equilibrium solutions in the Nash setup, are informative or noninformative. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
14. Compressibility Measures for Affinely Singular Random Vectors.
- Author
- Charusaie, Mohammad-Amin, Amini, Arash, and Rini, Stefano
- Subjects
- RANDOM measures, COMPRESSIBILITY, DIFFERENTIAL entropy, RANDOM variables, SIGNAL quantization, CONTINUOUS distributions, DATA compression
- Abstract
The notion of compressibility of a random measure is a rather general concept which finds applications in many contexts, from data compression to signal quantization and parameter estimation. While compressibility for discrete and continuous measures is generally well understood, the case of discrete-continuous measures is quite subtle. In this paper, we focus on a class of multi-dimensional random measures that have singularities on affine lower-dimensional subsets. We refer to this class of random variables as affinely singular. Affinely singular random vectors naturally arise when considering linear transformations of component-wise independent discrete-continuous random variables. To measure the compressibility of such distributions, we introduce the new notion of dimensional-rate bias (DRB), which is closely related to the entropy and differential entropy in discrete and continuous cases, respectively. Similar to entropy and differential entropy, the DRB is useful in evaluating the mutual information between distributions of the aforementioned type. Besides the DRB, we also evaluate the Rényi information dimension (RID) of these distributions. We further provide an upper bound for the RID of multi-dimensional random measures that are obtained by Lipschitz functions of component-wise independent discrete-continuous random variables (X). The upper bound is shown to be achievable when the Lipschitz function is $A \mathrm {X}$ , where $A$ satisfies ${\mathrm{ SPARK}}({A_{m\times n}}) = m+1$ (e.g., Vandermonde matrices). When considering discrete-domain moving-average processes with non-Gaussian excitation noise, the above results allow us to evaluate the block-average RID and DRB, as well as to determine a relationship between these parameters and other existing compressibility measures. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
15. Entropic Compressibility of Lévy Processes.
- Author
- Fageot, Julien, Fallah, Alireza, and Horel, Thibaut
- Subjects
- LEVY processes, POISSON processes, SELF-similar processes, CENTRAL limit theorem, WIENER processes
- Abstract
In contrast to their seemingly simple and shared structure of independence and stationarity, Lévy processes exhibit a wide variety of behaviors, from the self-similar Wiener process to piecewise-constant compound Poisson processes. Inspired by the recent paper of Ghourchian, Amini, and Gohari, we characterize their compressibility by studying the entropy of their double discretization (both in time and amplitude) in the regime of vanishing discretization steps. For a Lévy process with absolutely continuous marginals, this reduces to understanding the asymptotics of the differential entropy of its marginals at small times, for which we obtain a new local central limit theorem. We generalize known results for stable processes to the non-stable case, with a special focus on Lévy processes that are locally self-similar, and conceptualize a new compressibility hierarchy of Lévy processes, captured by their Blumenthal–Getoor index. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
16. Settling the Sharp Reconstruction Thresholds of Random Graph Matching.
- Author
- Wu, Yihong, Xu, Jiaming, and Yu, Sophie H.
- Subjects
- RANDOM graphs, COMPLETE graphs, MAXIMUM likelihood statistics, PHASE transitions
- Abstract
This paper studies the problem of recovering the hidden vertex correspondence between two edge-correlated random graphs. We focus on the Gaussian model where the two graphs are complete graphs with correlated Gaussian weights and the Erdős-Rényi model where the two graphs are subsampled from a common parent Erdős-Rényi graph ${\mathcal {G}}(n,p)$. For dense Erdős-Rényi graphs with $p=n^{-o(1)}$ , we prove that there exists a sharp threshold, above which one can correctly match all but a vanishing fraction of vertices and below which correctly matching any positive fraction is impossible, a phenomenon known as the “all-or-nothing” phase transition. Even more strikingly, in the Gaussian setting, above the threshold all vertices can be exactly matched with high probability. In contrast, for sparse Erdős-Rényi graphs with $p=n^{-\Theta (1)}$ , we show that the all-or-nothing phenomenon no longer holds and we determine the thresholds up to a constant factor. Along the way, we also derive the sharp threshold for exact recovery, sharpening the existing results in Erdős-Rényi graphs. The proof of the negative results builds upon a tight characterization of the mutual information based on the truncated second-moment computation and an “area theorem” that relates the mutual information to the integral of the reconstruction error. The positive results follow from a tight analysis of the maximum likelihood estimator that takes into account the cycle structure of the induced permutation on the edges. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Source Resolvability and Intrinsic Randomness: Two Random Number Generation Problems With Respect to a Subclass of f-Divergences.
- Author
- Nomura, Ryo
- Subjects
- RANDOM numbers, INFORMATION theory, CONVEX functions, GENERATIONS, RANDOM variables
- Abstract
This paper deals with two typical random number generation problems in information theory. One is the source resolvability problem (resolvability problem for short) and the other is the intrinsic randomness problem. In the literature, optimum achievable rates in these two problems with respect to the variational distance as well as the Kullback-Leibler (KL) divergence have already been analyzed. On the other hand, in this study we consider these two problems with respect to $f$ -divergences. The $f$ -divergence is a general non-negative measure between two probabilistic distributions on the basis of a convex function $f$. The class of $f$ -divergences includes several important measures such as the variational distance, the KL divergence, the Hellinger distance and so on. Hence, it is meaningful to consider the random number generation problems with respect to $f$ -divergences. In this paper, we impose some conditions on the function $f$ so as to simplify the analysis, that is, we consider a subclass of $f$ -divergences. Then, we first derive general formulas of the first-order optimum achievable rates with respect to $f$ -divergences. Next, we particularize our general formulas to several specified functions $f$. As a result, we reveal that it is easy to derive optimum achievable rates for several important measures from our general formulas. The second-order optimum achievable rates and optimistic optimum achievable rates have also been investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
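For concreteness, the $f$-divergence family discussed in the abstract above can be computed from one formula, $D_f(P||Q) = \sum_x Q(x) f(P(x)/Q(x))$ for a convex $f$ with $f(1)=0$. The sketch below (illustrative, not from the paper; the choice of $f$ for each measure follows one common convention, and conventions differ by constant factors) instantiates the KL divergence, the variational distance, and the squared Hellinger distance:

```python
from math import log2, sqrt

def f_divergence(P, Q, f):
    """D_f(P||Q) = sum_x Q(x) f(P(x)/Q(x)) for convex f with f(1) = 0.
    Assumes Q(x) > 0 wherever P(x) > 0."""
    return sum(q * f(p / q) for p, q in zip(P, Q))

P = [0.5, 0.25, 0.25]
Q = [1 / 3, 1 / 3, 1 / 3]

kl    = f_divergence(P, Q, lambda t: t * log2(t) if t > 0 else 0.0)   # KL (bits)
tv    = f_divergence(P, Q, lambda t: abs(t - 1) / 2)                  # variational distance
hell2 = f_divergence(P, Q, lambda t: (sqrt(t) - 1) ** 2 / 2)          # squared Hellinger

print(kl, tv, hell2)
```

Each measure is recovered by swapping in its convex function $f$, which is exactly why general formulas over a subclass of $f$-divergences yield achievable rates for all of these measures at once.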
18. A Bound on Undirected Multiple-Unicast Network Information Flow.
- Author
- Qureshi, Mohammad Ishtiyaq and Thakor, Satyajit
- Subjects
- INFORMATION networks, LINEAR network coding, INFORMATION theory, LINEAR programming, STATISTICAL decision making, LOGICAL prediction
- Abstract
One of the important unsolved problems in information theory is the conjecture that the undirected multiple-unicast network information capacity is the same as the routing capacity. This conjecture is verified only for a handful of networks and network classes. Moreover, only two explicit upper bounds on information capacity are known for general undirected networks: the sparsest cut bound and the linear programming bound. In this paper, we present an information-theoretic upper bound, called the partition bound, on the capacity of general undirected multiple-unicast networks. We show that a decision version of the problem of computing the bound is NP-complete. We present two classes of undirected multiple-unicast networks such that the partition bound is achievable by routing. Thus, the conjecture is proved for these classes of networks. Recently, the conjecture was proved for a new class of networks defined by properties relating to cut-set and source-sink paths. We show the existence of a network outside of this new class of networks such that the partition bound is achievable by routing. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
19. Recoverable Systems.
- Author
- Elishco, Ohad and Barg, Alexander
- Subjects
- RANDOM variables, DE Bruijn graph, RATE setting, ENTROPY
- Abstract
Motivated by the established notion of storage codes, we consider sets of infinite sequences over a finite alphabet such that every $k$ -tuple of consecutive entries is uniquely recoverable from its $l$ -neighborhood in the sequence. We address the problem of finding the maximum growth rate of the set, which we term capacity, as well as constructions of explicit families that approach the optimal rate. The techniques that we employ rely on the connection of this problem with constrained systems. In the second part of the paper we consider a modification of the problem wherein the entries in the sequence are viewed as random variables over a finite alphabet that follow some joint distribution, and the recovery condition requires that the Shannon entropy of the $k$ -tuple conditioned on its $l$ -neighborhood be bounded above by some $\epsilon >0$. We study properties of measures on infinite sequences that maximize the metric entropy under the recoverability condition. Drawing on tools from ergodic theory, we prove some properties of entropy-maximizing measures. We also suggest a procedure of constructing an $\epsilon $ -recoverable measure from a corresponding deterministic system. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
20. On the Capacity of MISO Optical Intensity Channels With Per-Antenna Intensity Constraints.
- Author
- Chen, Ru-Han, Li, Longguang, Zhang, Jian, Zhang, Wenyi, and Zhou, Jing
- Subjects
- MISO, RANDOM variables, OPTICAL transmitters, OPTICAL receivers, GAUSSIAN channels, RANDOM noise theory, SAMPLING theorem
- Abstract
This paper investigates the capacity of general multiple-input single-output (MISO) optical intensity channels (OICs) under per-antenna peak- and average-intensity constraints. We first consider the MISO equal-cost constrained OIC (EC-OIC), where, apart from the peak-intensity constraint, average intensities of inputs are equal to arbitrarily preassigned constants. The second model of our interest is the MISO bounded-cost constrained OIC (BC-OIC), where, as compared with the EC-OIC, average intensities of inputs are no larger than arbitrarily preassigned constants. By leveraging tools from quantile functions, the stop-loss transform, and the convex ordering of nonnegative random variables, we prove two decomposition theorems for bounded and nonnegative random variables, based on which we equivalently transform both the EC-OIC and the BC-OIC into respective single-input single-output channels under a peak-intensity and several stop-loss mean constraints. Capacity lower and upper bounds for both channels are established, based on which the asymptotic capacity at high and low signal-to-noise ratio is determined. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
21. Variations on a Theme by Massey.
- Subjects
- RENYI'S entropy, UNPUBLISHED materials, DISTRIBUTION (Probability theory), ENTROPY, DIFFERENTIAL entropy, RANDOM variables
- Abstract
In 1994, Jim Massey proposed the guessing entropy as a measure of the difficulty that an attacker has to guess a secret used in a cryptographic system, and established a well-known inequality between entropy and guessing entropy. Over 15 years before, in an unpublished work, he also established a well-known inequality for the entropy of an integer-valued random variable of given variance. In this paper, we establish a link between the two works by Massey in the more general framework of the relationship between discrete (absolute) entropy and continuous (differential) entropy. Two approaches are given in which the discrete entropy (or Rényi entropy) of an integer-valued variable can be upper bounded using the differential (Rényi) entropy of some suitably chosen continuous random variable. As an application, lower bounds on guessing entropy and guessing moments are derived in terms of entropy or Rényi entropy (without side information) and conditional entropy or Arimoto conditional entropy (when side information is available). [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
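Massey's inequality referenced in the abstract above, $G(X) \geq 2^{H(X)-2} + 1$ whenever $H(X) \geq 2$ bits, can be checked numerically. A minimal sketch (the distribution is an illustrative assumption, not from the paper):

```python
from math import log2

def guessing_entropy(probs):
    """Expected number of guesses G(X) when guessing the values of X
    in decreasing order of probability (the optimal strategy)."""
    p = sorted(probs, reverse=True)
    return sum(i * pi for i, pi in enumerate(p, start=1))

def entropy(probs):
    """Shannon entropy H(X) in bits."""
    return -sum(p * log2(p) for p in probs if p > 0)

probs = [0.4, 0.2, 0.15, 0.1, 0.08, 0.07]
H, G = entropy(probs), guessing_entropy(probs)

# Massey's bound applies here since H >= 2 bits.
print(H, G, 2 ** (H - 2) + 1)
```

For this distribution $G(X) = 2.47$ and the bound evaluates to roughly $2.23$, so the inequality holds with some slack.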
22. Bias for the Trace of the Resolvent and Its Application on Non-Gaussian and Non-Centered MIMO Channels.
- Author
- Zhang, Xin and Song, Shenghui
- Subjects
- RANDOM matrices, GAUSSIAN distribution, RANDOM variables, RECEIVING antennas, ELECTROMAGNETIC interference, GAUSSIAN channels, RESOLVENTS (Mathematics)
- Abstract
The mutual information (MI) of Gaussian multi-input multi-output (MIMO) channels has been evaluated by utilizing random matrix theory (RMT) and shown to asymptotically follow a Gaussian distribution, where the ergodic mutual information (EMI) converges to a deterministic quantity. However, with non-Gaussian channels, there is a bias between the EMI and its deterministic equivalent (DE), whose evaluation is not available in the literature. This bias of the EMI is related to the bias for the trace of the resolvent in large RMT. In this paper, we first derive the bias for the trace of the resolvent, which is further extended to compute the bias for the linear spectral statistics (LSS). Then, we apply the above results on non-Gaussian MIMO channels to determine the bias for the EMI. It is also proved that the bias for the EMI is −0.5 times that for the variance of the MI. Finally, the derived bias is utilized to modify the central limit theorem (CLT) and calculate the outage probability. Numerical results show that the modified CLT significantly outperforms previous methods in approximating the distribution of the MI and improves the accuracy of the outage probability evaluation. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
23. Estimation in Poisson Noise: Properties of the Conditional Mean Estimator.
- Author
-
Dytso, Alex and Vincent Poor, H.
- Subjects
- *
CONDITIONAL expectations , *RANDOM variables , *NOISE , *PROBABILITY density function , *ESTIMATION theory - Abstract
This paper considers estimation of a random variable in Poisson noise with signal scaling coefficient and dark current as explicit parameters of the noise model. Specifically, the paper focuses on properties of the conditional mean estimator as a function of the scaling coefficient, the dark current parameter, the distribution of the input random variable and channel realizations. With respect to the scaling coefficient and the dark current, several identities in terms of derivatives are established. For example, it is shown that the gradient of the conditional mean estimator with respect to the scaling coefficient and dark current parameter is proportional to the conditional variance. Moreover, a score function is proposed and a Tweedie-like formula for the conditional expectation is recovered. With respect to the distribution, several regularity conditions are shown. For instance, it is shown that the conditional mean estimator uniquely determines the input distribution. Moreover, it is shown that if the conditional expectation is close to a linear function in terms of mean squared error, then the input distribution is approximately gamma in the Lévy distance. Furthermore, sufficient and necessary conditions for linearity are found. Interestingly, it is shown that the conditional mean estimator cannot be linear when the dark current parameter of the Poisson noise is non-zero. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
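The conditional mean estimator discussed above can be evaluated directly for a discrete prior. A minimal sketch (the prior, scaling coefficient, and dark current values are illustrative assumptions, not from the paper), with Y ~ Poisson(aX + λ₀):

```python
import math

def poisson_pmf(y, lam):
    """P(Y = y) for Y ~ Poisson(lam)."""
    return math.exp(-lam) * lam**y / math.factorial(y)

def conditional_mean(y, prior, a, dark):
    """E[X | Y = y] for Y ~ Poisson(a*X + dark), with X drawn from a
    discrete prior given as {x: P(X = x)} (hypothetical helper)."""
    weights = {x: p * poisson_pmf(y, a * x + dark) for x, p in prior.items()}
    z = sum(weights.values())
    return sum(x * w for x, w in weights.items()) / z

# Two-point prior, scaling a = 2, dark current 0.5 (illustrative values).
prior = {1.0: 0.5, 3.0: 0.5}
est = [conditional_mean(y, prior, a=2.0, dark=0.5) for y in range(6)]
# The estimate stays inside the support of X and increases with y.
assert all(1.0 <= e <= 3.0 for e in est)
assert all(e1 < e2 for e1, e2 in zip(est, est[1:]))
```

For a two-point prior, the posterior odds grow geometrically in y, so the estimate moves monotonically from the smaller support point toward the larger one; it is visibly a nonlinear function of y, consistent with the paper's observation that the estimator cannot be linear when the dark current is non-zero.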
24. Bits Through Queues With Feedback.
- Author
-
Aptel, Laure and Tchamkerten, Aslan
- Subjects
- *
RANDOM variables - Abstract
In their seminal 1996 paper, Anantharam and Verdú showed that feedback does not increase the capacity of a queue under First-in-First-Out service policy and exponentially distributed service time. Since the channel has memory, this negative result raises the question whether it extends to other non-trivial combinations of service policy and service time. This paper addresses this question by providing two sufficient conditions under which feedback either increases capacity or does not increase capacity. The first is a sufficient condition on the service time distribution for feedback to increase capacity under First-In-First-Out service policy. The second is a sufficient condition for feedback not to increase capacity and is general in that it depends on the output distribution of the queue, but explicitly depends neither on the queue policy nor on the service time distribution. This condition is satisfied, for instance, by queues with Last-Come-First-Serve service policy and bounded service times. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
25. Novel Outer Bounds and Capacity Results for the Interference Channel With Conferencing Receivers.
- Author
-
Farsani, Reza K. and Khandani, Amir K.
- Subjects
- *
GAUSSIAN channels , *RANDOM variables , *INTEGRATING circuits - Abstract
In this paper, capacity bounds for the two-user interference channels with cooperative receivers via conferencing links of finite capacities are investigated. Capacity results known for these communication scenarios are limited to a very few special cases of the one-sided channels. One of the challenges in obtaining capacity limits of such cooperative networks is how to establish efficient capacity outer bounds for them. In this paper, by applying new techniques, novel capacity outer bounds are established for the interference channels with conferencing receivers. Using the outer bounds, several new capacity results are proved for interesting channels with unidirectional cooperation in strong and mixed interference regimes. A conferencing link (between receivers) may be utilized to provide a receiver with “information about its desired signal” or “information about its interfering signal”. It is demonstrated that both strategies can be helpful to achieve capacity. Finally, for the case of Gaussian interference channel with conferencing receivers, it is argued that our outer bound is strictly tighter than the previous one derived by Wang and Tse. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
26. The Capacity Achieving Distribution for the Amplitude Constrained Additive Gaussian Channel: An Upper Bound on the Number of Mass Points.
- Author
-
Dytso, Alex, Yagli, Semih, Poor, H. Vincent, and Shamai Shitz, Shlomo
- Subjects
- *
GAUSSIAN channels , *PROBABILITY density function , *RANDOM noise theory , *CHANNEL coding , *RANDOM variables - Abstract
This paper studies an $n$ -dimensional additive Gaussian noise channel with a peak-power-constrained input. It is well known that, in this case, when $n=1$ the capacity-achieving input distribution is discrete with finitely many mass points, and when $n>1$ the capacity-achieving input distribution is supported on finitely many concentric shells. However, due to the previous proof technique, not even a bound on the exact number of mass points/shells was available. This paper provides an alternative proof of the finiteness of the number of mass points/shells of the capacity-achieving input distribution while producing the first firm bounds on the number of mass points and shells, paving an alternative way for approaching many such problems. The first main result of this paper is an order tight implicit bound which shows that the number of mass points in the capacity-achieving input distribution is within a factor of two from the number of zeros of the downward shifted capacity-achieving output probability density function. Next, this implicit bound is utilized to provide a first firm upper bound on the support size of the optimal input distribution, an $O(\mathsf {A}^{2})$ upper bound where $\mathsf {A}$ denotes the constraint on the input amplitude. The second main result of this paper generalizes the first one to the case when $n>1$ , showing that, for each and every dimension $n\ge 1$ , the number of shells that the optimal input distribution contains is $O(\mathsf {A}^{2})$. Finally, the third main result of this paper reconsiders the case $n=1$ with an additional average power constraint, demonstrating a similar $O(\mathsf {A}^{2})$ bound. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
27. Signaling Games for Log-Concave Distributions: Number of Bins and Properties of Equilibria.
- Author
-
Kazikli, Ertan, Saritas, Serkan, Gezici, Sinan, Linder, Tamas, and Yuksel, Serdar
- Subjects
- *
EQUILIBRIUM , *REAL variables , *RANDOM variables , *DISTRIBUTION (Probability theory) , *CHANNEL coding - Abstract
We investigate the equilibrium behavior for the decentralized cheap talk problem for real random variables and quadratic cost criteria in which an encoder and a decoder have misaligned objective functions. In prior work, it has been shown that the number of bins in any equilibrium has to be countable, generalizing a classical result due to Crawford and Sobel who considered sources with density supported on [0, 1]. In this paper, we first refine this result in the context of log-concave sources. For sources with two-sided unbounded support, we prove that, for any finite number of bins, there exists a unique equilibrium. In contrast, for sources with semi-unbounded support, there may be a finite upper bound on the number of bins in equilibrium depending on certain conditions stated explicitly. Moreover, we prove that for log-concave sources, the expected costs of the encoder and the decoder in equilibrium decrease as the number of bins increases. Furthermore, for strictly log-concave sources with two-sided unbounded support, we prove convergence to the unique equilibrium under best response dynamics which starts with a given number of bins, making a connection with the classical theory of optimal quantization and convergence results of Lloyd’s method. In addition, we consider more general sources which satisfy certain assumptions on the tail(s) of the distribution and we show that there exist equilibria with infinitely many bins for sources with two-sided unbounded support. Further explicit characterizations are provided for sources with exponential, Gaussian, and compactly-supported probability distributions. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
28. Bayesian Risk With Bregman Loss: A Cramér–Rao Type Bound and Linear Estimation.
- Author
-
Dytso, Alex, FauB, Michael, and Poor, H. Vincent
- Subjects
- *
MEAN square algorithms , *PROBABILITY density function - Abstract
A general class of Bayesian lower bounds when the underlying loss function is a Bregman divergence is demonstrated. This class can be considered as an extension of the Weinstein–Weiss family of bounds for the mean squared error and relies on finding a variational characterization of Bayesian risk. This approach allows for the derivation of a version of the Cramér–Rao bound that is specific to a given Bregman divergence. This new generalization of the Cramér–Rao bound reduces to the classical one when the loss function is taken to be the Euclidean norm. In order to evaluate the effectiveness of the new lower bounds, the paper also develops upper bounds on Bayesian risk, which are based on optimal linear estimators. The effectiveness of the new bound is evaluated in the Poisson noise setting. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Distributed Compression of Graphical Data.
- Author
-
Delgosha, Payam and Anantharam, Venkat
- Subjects
- *
SPARSE graphs , *DATA warehousing , *TIME series analysis , *DATA compression , *SOCIAL networks - Abstract
In contrast to time series, graphical data is data indexed by the vertices and edges of a graph. Modern applications such as the internet, social networks, genomics and proteomics generate graphical data, often at large scale. The large scale argues for the need to compress such data for storage and subsequent processing. Since this data might have several components available in different locations, it is also important to study distributed compression of graphical data. In this paper, we derive a rate region for this problem which is a counterpart of the Slepian–Wolf theorem. We characterize the rate region when the statistical description of the distributed graphical data can be modeled as being one of two types – as a member of a sequence of marked sparse Erdős–Rényi ensembles or as a member of a sequence of marked configuration model ensembles. Our results are in terms of a generalization of the notion of entropy introduced by Bordenave and Caputo in the study of local weak limits of sparse graphs. Furthermore, we give a generalization of this result for Erdős–Rényi and configuration model ensembles with more than two sources. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. The Gray-Wyner Network and Wyner’s Common Information for Gaussian Sources.
- Author
-
Sula, Erixhen and Gastpar, Michael
- Subjects
- *
INFORMATION resources , *INFORMATION theory , *COVARIANCE matrices , *RANDOM variables , *MULTICASTING (Computer networks) - Abstract
This paper presents explicit solutions for two related non-convex information extremization problems due to Gray and Wyner in the Gaussian case. The first problem is the Gray-Wyner network subject to a sum-rate constraint on the two private links. Here, our argument establishes the optimality of Gaussian codebooks and hence, a closed-form formula for the optimal rate region. The second problem is Wyner’s common information and a generalization thereof, where conditional independence is generalized to a limit on the conditional mutual information. We present full explicit solutions for the scalar as well as the vector case. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
31. Outer Bounds for Multiuser Settings: The Auxiliary Receiver Approach.
- Author
-
Gohari, Amin and Nair, Chandra
- Subjects
- *
BROADCAST channels , *GAUSSIAN channels , *MARKOV processes - Abstract
This paper employs auxiliary receivers as a mathematical tool to identify Gallager-type auxiliary random variables and write outer bounds for some basic multiuser settings. This approach is then applied to the relay, interference, and broadcast channel settings, yielding new outer bounds that strictly improve on the classical ones. For instance, we strictly improve on: the cutset outer bound for the scalar Gaussian relay channel, the outer bounds for the Gaussian Z-interference channel, and the outer bounds for the two-receiver broadcast channel. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
32. On Sampling Continuous-Time AWGN Channels.
- Author
-
Han, Guangyue and Shamai, Shlomo
- Subjects
- *
ADDITIVE white Gaussian noise channels , *PSYCHOLOGICAL feedback , *ADDITIVE white Gaussian noise , *GAUSSIAN channels , *SAMPLING theorem - Abstract
For a continuous-time additive white Gaussian noise (AWGN) channel with possible feedback, it has been shown that as sampling gets infinitesimally fine, the mutual information of the associated discrete-time channels converges to that of the original continuous-time channel. We give in this paper more quantitative strengthenings of this result, which, among other implications, characterize how over-sampling approaches the true mutual information of a continuous-time Gaussian channel with a bandwidth limit. The assumptions in our results are relatively mild. In particular, for the non-feedback case, compared to the Shannon-Nyquist sampling theorem, a widely used tool to connect continuous-time Gaussian channels to their discrete-time counterparts that requires the band-limitedness of the channel input, our results only require some integrability conditions on the power spectral density function of the input. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. Generalized Submodular Information Measures: Theoretical Properties, Examples, Optimization Algorithms, and Applications.
- Author
-
Iyer, Rishabh, Khargonkar, Ninad, Bilmes, Jeff, and Asnani, Himanshu
- Subjects
- *
INFORMATION measurement , *SUBMODULAR functions , *MATHEMATICAL optimization , *RANDOM variables , *MACHINE learning , *RANDOM sets - Abstract
Information-theoretic quantities like entropy and mutual information have found numerous uses in machine learning. It is well known that there is a strong connection between these entropic quantities and submodularity since entropy over a set of random variables is submodular. In this paper, we study combinatorial information measures that generalize independence, (conditional) entropy, (conditional) mutual information, and total correlation defined over sets of (not necessarily random) variables. These measures strictly generalize the corresponding entropic measures since they are all parameterized via submodular functions that themselves strictly generalize entropy. Critically, we show that, unlike entropic mutual information in general, the submodular mutual information is actually submodular in one argument, holding the other fixed, for a large class of submodular functions whose third-order partial derivatives satisfy a non-negativity property. This turns out to include a number of practically useful cases such as the facility location and set-cover functions. We study specific instantiations of the submodular information measures on these, as well as the probabilistic coverage, graph-cut, log-determinants, and saturated coverage functions and see that they all have mathematically intuitive and practically useful expressions. Finally, we also study generalized independence between subsets of datapoints (random variables in the entropic case), and connect the independence characterizations to independence in log-submodular distributions. Regarding applications, we connect the maximization of submodular (conditional) mutual information to problems such as mutual-information-based, query-based, and privacy preserving summarization—and we connect optimizing the multi-set submodular mutual information to clustering and robust partitioning. We perform real world as well as synthetic experiments on various data summarization tasks. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
34. On the Loss of Single-Letter Characterization: The Dirty Multiple Access Channel.
- Author
-
Philosof, Tal and Zamir, Ram
- Subjects
- *
MULTIPLE access protocols (Computer network protocols) , *SCALAR field theory , *GAUSSIAN processes , *MATHEMATICAL models , *ENTROPY (Information theory) , *RANDOM variables , *MARKOV processes - Abstract
For general memoryless systems, the existing information-theoretic solutions have a "single-letter" form. This reflects the fact that optimum performance can be approached by a random code (or a random binning scheme), generated using independent and identically distributed copies of some scalar distribution. Is that the form of the solution of any (information-theoretic) problem? In fact, some counterexamples are known. The most famous one is the "two help one" problem: Körner and Marton showed that if we want to decode the modulo-two sum of two correlated binary sources from their independent encodings, then linear coding is better than random coding. In this paper we provide another counterexample, the "doubly-dirty" multiple-access channel (MAC). Like the Körner-Marton problem, this is a multiterminal scenario where side information is distributed among several terminals; each transmitter knows part of the channel interference while the receiver only observes the channel output. We give an explicit solution for the capacity region of the binary doubly-dirty MAC, demonstrate how this region can be approached using a linear coding scheme, and prove that the "best known single-letter region" is strictly contained in it. We also state a conjecture regarding the capacity loss of single-letter characterization in the Gaussian case. [ABSTRACT FROM AUTHOR]
- Published
- 2009
- Full Text
- View/download PDF
35. Information-Theoretic Secret Sharing From Correlated Gaussian Random Variables and Public Communication.
- Author
-
Rana, Vidhi, Chou, Remi A., and Kwon, Hyuck M.
- Subjects
- *
PUBLIC communication , *INFORMATION-theoretic security , *SHARING , *RANDOM variables , *GAUSSIAN channels - Abstract
In this paper, we study an information-theoretic secret sharing problem, where a dealer distributes shares of a secret among a set of participants under the following constraints: (i) authorized sets of users can recover the secret by pooling their shares, and (ii) non-authorized sets of colluding users cannot learn any information about the secret. We assume that the dealer and participants observe the realizations of correlated Gaussian random variables and that the dealer can communicate with participants through a one-way, authenticated, rate-limited, and public channel. Unlike traditional secret sharing protocols, in our setting, no perfectly secure channel is needed between the dealer and the participants. Our main result is a closed-form characterization of the fundamental trade-off between secret rate and public communication rate. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
36. A Graph Symmetrization Bound on Channel Information Leakage Under Blowfish Privacy.
- Author
-
Edwards, Tobias, Rubinstein, Benjamin I. P., Zhang, Zuhe, and Zhou, Sanming
- Subjects
- *
DATA privacy , *PUFFERS (Fish) , *AUTOMORPHISM groups , *PRIVACY , *LEAKAGE , *CHANNEL coding - Abstract
Blowfish privacy is a recent generalisation of differential privacy that enables improved utility while maintaining privacy policies with semantic guarantees, a factor that has driven the popularity of differential privacy in computer science. This paper relates Blowfish privacy to an important measure of privacy loss of information channels from the communications theory community: min-entropy leakage. Symmetry in an input data neighbouring relation is central to known connections between differential privacy and min-entropy leakage. But while differential privacy exhibits strong symmetry, Blowfish neighbouring relations correspond to arbitrary simple graphs owing to the framework’s flexible privacy policies. To bound the min-entropy leakage of Blowfish-private mechanisms we organise our analysis over symmetrical partitions corresponding to orbits of graph automorphism groups. A construction meeting our bound with asymptotic equality demonstrates tightness. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
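Min-entropy leakage, the privacy measure bounded in the abstract above, has a simple closed form under a uniform prior: log₂ of the sum of the column maxima of the channel matrix. A minimal sketch (the randomized-response channel and function name are illustrative assumptions, not the paper's construction):

```python
import math

def min_entropy_leakage_uniform(channel):
    """Min-entropy leakage (bits) of a channel matrix under a uniform
    prior: log2 of the sum over outputs of the column maxima."""
    n_out = len(channel[0])
    return math.log2(sum(max(row[j] for row in channel) for j in range(n_out)))

# Binary randomized response: report the true bit w.p. 1-q, flip w.p. q.
q = 0.25
rr = [[1 - q, q],
      [q, 1 - q]]
leak = min_entropy_leakage_uniform(rr)
# Closed form for q < 1/2: log2(2*(1-q)) = 1 + log2(1-q).
assert abs(leak - (1 + math.log2(1 - q))) < 1e-12
```

The symmetry of this two-row channel is exactly the kind of structure that the paper's orbit-based analysis exploits for general Blowfish neighbouring graphs.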
37. Covariance Decomposition as a Universal Limit on Correlations in Networks.
- Author
-
Beigi, Salman and Renou, Marc-Olivier
- Subjects
- *
MATRIX decomposition , *COVARIANCE matrices , *RANDOM variables , *QUANTUM entanglement , *BIPARTITE graphs - Abstract
Parties connected to independent sources through a network can generate correlations among themselves. Notably, the space of feasible correlations for a given network depends on the physical nature of the sources and the measurements performed by the parties. In particular, quantum sources give access to nonlocal correlations that cannot be generated classically. In this paper, we derive a universal limit on correlations in networks in terms of their covariance matrix. We show that in a network satisfying a certain condition, the covariance matrix of any feasible correlation can be decomposed as a summation of positive semidefinite matrices, each of whose terms corresponds to a source in the network. Our result is universal in the sense that it holds in any physical theory of correlation in networks, including the classical, quantum and all generalized probabilistic theories. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
38. Guesswork With Quantum Side Information.
- Author
-
Hanson, Eric P., Katariya, Vishal, Datta, Nilanjana, and Wilde, Mark M.
- Subjects
- *
QUANTUM measurement , *QUANTUM cryptography , *TASK analysis , *ENTROPY (Information theory) - Abstract
What is the minimum number of guesses needed on average to guess a realization of a random variable correctly? The answer to this question led to the introduction of a quantity called guesswork by Massey in 1994, which can be viewed as an alternate security criterion to entropy. In this paper, we consider the guesswork in the presence of quantum side information, and show that a general sequential guessing strategy is equivalent to performing a single quantum measurement and choosing a guessing strategy based on the outcome. We use this result to deduce entropic one-shot and asymptotic bounds on the guesswork in the presence of quantum side information, and to formulate a semi-definite program (SDP) to calculate the quantity. We evaluate the guesswork for a simple example involving the BB84 states, both numerically and analytically, and we prove a continuity result that certifies the security of slightly imperfect key states when the guesswork is used as the security criterion. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
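The classical (non-quantum) counterpart of guesswork with side information is straightforward to compute and already shows the qualitative effect: conditioning can only reduce the expected number of guesses. A minimal sketch with an illustrative joint pmf (the numbers are my own, not from the paper):

```python
def guesswork(probs):
    """Expected guesses when querying values in decreasing-probability order."""
    return sum(i * p for i, p in enumerate(sorted(probs, reverse=True), 1))

# Joint pmf p(x, y) over X in {0,1,2}, Y in {0,1} (illustrative numbers).
joint = [[0.30, 0.05],
         [0.05, 0.30],
         [0.15, 0.15]]
py = [sum(row[y] for row in joint) for y in (0, 1)]
px = [sum(row) for row in joint]

# Without side information: guess X from its marginal.
g_no_side = guesswork(px)
# With side information: for each Y=y, guess in order of p(x|y).
g_side = sum(py[y] * guesswork([row[y] / py[y] for row in joint]) for y in (0, 1))
# Side information can only reduce the expected number of guesses.
assert g_side <= g_no_side
```

In the quantum setting studied by the paper, the side information is a quantum state rather than a classical Y, and the guessing order is chosen from the outcome of a single measurement; the sketch above is only the classical baseline.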
39. An Upgrading Algorithm With Optimal Power Law.
- Author
-
Ordentlich, Or and Tal, Ido
- Subjects
- *
NUMERICAL analysis , *APPROXIMATION algorithms , *RANDOM variables , *MARKOV processes , *DISTRIBUTION (Probability theory) - Abstract
Consider a channel $W$ along with a given input distribution $P_{X}$. In certain settings, such as in the construction of polar codes, the output alphabet of $W$ is ‘too large’, and hence we replace $W$ by a channel $Q$ having a smaller output alphabet. We say that $Q$ is upgraded with respect to $W$ if $W$ is obtained from $Q$ by processing its output. In this case, the mutual information $I(P_{X},W)$ between the input and output of $W$ is upper-bounded by the mutual information $I(P_{X},Q)$ between the input and output of $Q$. In this paper, we present an algorithm that produces an upgraded channel $Q$ from $W$ , as a function of $P_{X}$ and the required output alphabet size of $Q$ , denoted $L$. We show that the difference in mutual informations is ‘small’. Namely, it is $O(L^{-2/(| \mathcal {X}|-1)})$ , where $| \mathcal {X}|$ is the size of the input alphabet. This power law of $L$ is optimal. We complement our analysis with numerical experiments which show that the developed algorithm also improves upon existing state-of-the-art algorithms in non-asymptotic setups. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
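The ordering I(P_X, W) ≤ I(P_X, Q) stated in the abstract above is an instance of the data processing inequality, and can be checked on a toy pair of channels where W is obtained from Q by merging two output symbols. A minimal sketch (the channel matrices are illustrative; this is not the paper's upgrading algorithm):

```python
import math

def mutual_information(px, W):
    """I(P_X, W) in bits for input distribution px and channel matrix W
    (rows indexed by inputs, columns by outputs)."""
    n_out = len(W[0])
    py = [sum(p * row[j] for p, row in zip(px, W)) for j in range(n_out)]
    I = 0.0
    for p, row in zip(px, W):
        for j, w in enumerate(row):
            if w > 0:
                I += p * w * math.log2(w / py[j])
    return I

px = [0.5, 0.5]
Q = [[0.7, 0.2, 0.1],        # finer output alphabet
     [0.1, 0.3, 0.6]]
W = [[0.7, 0.3],             # W obtained from Q by merging outputs 2 and 3
     [0.1, 0.9]]
# Q is upgraded with respect to W, so I(P_X, Q) >= I(P_X, W).
assert mutual_information(px, Q) >= mutual_information(px, W)
```

The paper's algorithm works in the opposite direction: given W and a target alphabet size L, it constructs an upgraded Q whose mutual-information excess over W decays as the stated optimal power law in L.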
40. Optimal Rates of Teaching and Learning Under Uncertainty.
- Author
-
Ling, Yan Hao and Scarlett, Jonathan
- Subjects
- *
ERROR probability , *CHANNEL coding , *ERROR rates , *ELECTRONIC data processing , *NOISE measurement , *TEACHING models - Abstract
In this paper, we consider a recently-proposed model of teaching and learning under uncertainty, in which a teacher receives independent observations of a single bit corrupted by binary symmetric noise, and sequentially transmits to a student through another binary symmetric channel based on the bits observed so far. After a given number $n$ of transmissions, the student outputs an estimate of the unknown bit, and we are interested in the exponential decay rate of the error probability as $n$ increases. We propose a novel block-structured teaching strategy in which the teacher encodes the number of 1s received in each block, and show that the resulting error exponent is the binary relative entropy $D\left({\frac {1}{2}\|\max (p,q)}\right)$ , where $p$ and $q$ are the noise parameters. This matches a trivial converse result based on the data processing inequality, and settles two conjectures of [Jog and Loh, 2021] and [Huleihel et al., 2019]. In addition, we show that the computation time required by the teacher and student is linear in $n$. We also study a more general setting in which the binary symmetric channels are replaced by general binary-input discrete memoryless channels. We provide an achievability bound and a converse bound, and show that the two coincide in certain cases, including (i) when the two channels are identical, and (ii) when the student-teacher channel is a binary symmetric channel. More generally, we give sufficient conditions under which our learning rate is the best possible for block-structured protocols. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
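The error exponent D(1/2 ‖ max(p, q)) in the abstract above is the binary relative entropy between 1/2 and the larger of the two noise parameters, so the exponent is governed by the noisier channel. A minimal sketch of its evaluation (function names are my own):

```python
import math

def binary_kl(a, b):
    """Binary relative entropy D(a || b) in nats."""
    return a * math.log(a / b) + (1 - a) * math.log((1 - a) / (1 - b))

def teaching_exponent(p, q):
    """Error exponent D(1/2 || max(p, q)) for BSC noise parameters p, q."""
    return binary_kl(0.5, max(p, q))

# The exponent is symmetric in the two noise parameters and positive
# whenever the noisier channel is not pure noise (max(p, q) < 1/2).
assert teaching_exponent(0.1, 0.2) == teaching_exponent(0.2, 0.1)
assert teaching_exponent(0.1, 0.2) > 0
assert abs(binary_kl(0.5, 0.5)) < 1e-15
```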
41. Conditional Independence Structures Over Four Discrete Random Variables Revisited: Conditional Ingleton Inequalities.
- Subjects
- *
ENTROPY (Information theory) , *RANDOM variables - Abstract
The paper deals with linear information inequalities valid for entropy functions induced by discrete random variables. Specifically, the so-called conditional Ingleton inequalities are at the center of interest: these are valid under conditional independence assumptions on the inducing random variables. We discuss five inequalities of this particular type, four of which have appeared earlier in the literature. Besides the proof of the new fifth inequality, simpler proofs of (some of) the former inequalities are presented. These five information inequalities are used to characterize all conditional independence structures induced by four discrete random variables. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
42. Sketching Semidefinite Programs for Faster Clustering.
- Author
-
Mixon, Dustin G. and Xie, Kaiying
- Subjects
- *
SEMIDEFINITE programming , *STOCHASTIC processes , *TASK analysis , *CONVEX functions , *RANDOM variables - Abstract
Many clustering problems can be solved using semidefinite programming. Theoretical results in this vein frequently consider data with a planted clustering and a notion of signal strength such that the semidefinite program exactly recovers the planted clustering when the signal strength is sufficiently large. In practice, semidefinite programs are notoriously slow, and so speedups are welcome. In this paper, we show how to sketch a popular semidefinite relaxation of a graph clustering problem known as minimum bisection, and our analysis supports a meta-claim that the clustering task is less computationally burdensome when there is more signal. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
43. Reverse Euclidean and Gaussian Isoperimetric Inequalities for Parallel Sets With Applications.
- Subjects
- *
ISOPERIMETRIC inequalities , *MACHINE learning , *INFORMATION theory , *SURFACE area , *COMPUTATIONAL complexity , *EUCLIDEAN algorithm - Abstract
The $r$ -parallel set of a set $A \subseteq {\mathbb {R}} ^{d}$ is the set of all points whose distance from $A$ is less than $r$. In this paper, we show that the surface area of an $r$ -parallel set in ${\mathbb {R}}^{d}$ with volume at most $V$ is upper-bounded by $e^{\Theta (d)}V/r$ , whereas its Gaussian surface area is upper-bounded by $\max (e^{\Theta (d)}, e^{\Theta (d)}/r)$. We also derive a reverse form of the Brunn-Minkowski inequality for $r$ -parallel sets, and as an aside a reverse entropy power inequality for Gaussian-smoothed random variables. We apply our results to two problems in theoretical machine learning: (1) bounding the computational complexity of learning $r$ -parallel sets under a Gaussian distribution; and (2) bounding the sample complexity of estimating robust risk, which is a notion of risk in the adversarial machine learning literature that is analogous to the Bayes risk in hypothesis testing. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
44. Secret Key Generation From Vector Gaussian Sources With Public and Private Communications.
- Author
-
Xu, Yinfei and Cao, Daming
- Subjects
- *
PUBLIC communication , *GAUSSIAN channels , *BROADCAST channels , *INFORMATION-theoretic security , *RANDOM variables - Abstract
In this paper, we consider the problem of secret key generation with one-way communication through both a rate-limited public channel and a rate-limited secure channel, where the public channel is from Alice to Bob and Eve and the secure channel is from Alice to Bob. In this model, we do not pose any constraints on the sources, i.e., Bob is not degraded to or less noisy than Eve. We obtain the optimal secret key rate in this problem, both for discrete memoryless sources and vector Gaussian sources. The vector Gaussian characterization is derived by suitably applying the enhancement argument and proving a new extremal inequality. The extremal inequality can be seen as a coupling of two extremal inequalities, which are related to the degraded compound MIMO Gaussian broadcast channel and the vector generalization of Costa’s entropy power inequality, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
45. One-Shot Variable-Length Secret Key Agreement Approaching Mutual Information.
- Author
-
Li, Cheuk Ting and Anantharam, Venkat
- Subjects
- *
RANDOM variables , *TASK analysis , *INTERNET of things - Abstract
This paper studies an information-theoretic one-shot variable-length secret key agreement problem with public discussion. Let X and Y be jointly distributed random variables, each taking values in some measurable space. Alice and Bob observe X and Y respectively, can communicate interactively through a public noiseless channel, and want to agree on a key length and a key that is approximately uniformly distributed over all bit sequences with the agreed key length. The public discussion is observed by an eavesdropper, Eve. The key should be approximately independent of the public discussion, conditional on the key length. We show that the optimal expected key length is close to the mutual information I(X;Y) within a logarithmic gap. Moreover, an upper bound and a lower bound on the optimal expected key length can be written down in terms of I(X;Y) only. This means that the optimal one-shot performance is always within a small gap of the optimal asymptotic performance regardless of the distribution of the pair (X,Y). This one-shot result may find applications in situations where the components of an i.i.d. pair source (X^n, Y^n) are observed sequentially and the key is output bit by bit, or in situations where the random source is not an i.i.d. or ergodic process. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
46. Generalized Compression Strategy for the Downlink Cloud Radio Access Network.
- Author
- Patil, Pratik and Yu, Wei
- Subjects
- *RADIO access networks, *VIDEO coding, *IMAGE compression, *CELL phone systems
- Abstract
This paper studies the downlink of a cloud radio access network (C-RAN) in which a centralized processor (CP) communicates with mobile users through base stations (BSs) that are connected to the CP via finite-capacity fronthaul links. Information theoretically, the downlink of a C-RAN is modeled as a two-hop broadcast-relay network. Among the various transmission and relaying strategies for such a model, this paper focuses on the compression strategy, in which the CP centrally encodes the signals to be broadcast jointly by the BSs, then compresses and sends these signals to the BSs through the fronthaul links. We characterize an achievable rate region for a generalized compression strategy with Marton’s multicoding for broadcasting and multivariate compression for fronthaul transmission. We then compare this rate region with that of the distributed decode-forward (DDF) scheme, which achieves the capacity of general relay networks to within a constant gap, and show that the difference lies in that DDF performs Marton’s multicoding and multivariate compression jointly rather than successively, as in the compression strategy. A main result of this paper is that under the assumption that the fronthaul links are subject to a sum capacity constraint, this difference is immaterial: for the Gaussian network, the compression strategy based on successive encoding can already achieve the capacity region of the C-RAN to within a constant gap, where the gap is independent of the channel parameters and the power constraints at the BSs. As a further result, for a C-RAN under individual fronthaul constraints, this paper also establishes that the compression strategy achieves the sum capacity to within a constant gap. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
47. High Dimensional Inference With Random Maximum A-Posteriori Perturbations.
- Author
- Hazan, Tamir, Orabona, Francesco, Sarwate, Anand D., Maji, Subhransu, and Jaakkola, Tommi S.
- Subjects
- *MONTE Carlo method, *P-value (Statistics), *BOLTZMANN factor, *RANDOM variables, *MATHEMATICAL statistics, *MARKOV chain Monte Carlo, *EXTREME value theory
- Abstract
This paper presents a new approach, called perturb-max, for high-dimensional statistical inference in graphical models, based on applying random perturbations followed by optimization. This framework injects randomness into maximum a-posteriori (MAP) predictors by randomly perturbing the potential function of the input. A classic result from extreme value statistics asserts that perturb-max operations generate unbiased samples from the Gibbs distribution when high-dimensional perturbations are used. Unfortunately, the computational cost of generating so many high-dimensional random variables can be prohibitive. However, when the perturbations are low dimensional, sampling the perturb-max prediction is as efficient as MAP optimization. This paper shows that the expected value of perturb-max inference with low-dimensional perturbations can be used sequentially to generate unbiased samples from the Gibbs distribution. Furthermore, the expected value of the maximal perturbations is a natural bound on the entropy of such perturb-max models. A measure concentration result for perturb-max values shows that the deviation of their sampled average from its expectation decays exponentially in the number of samples, allowing effective approximation of the expectation. [ABSTRACT FROM AUTHOR]
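The classic extreme-value result the abstract invokes is, in its simplest form, the Gumbel-max trick: adding i.i.d. Gumbel(0, 1) noise to the potentials and taking the argmax yields exact samples from the Gibbs distribution. The sketch below is our own one-dimensional illustration of that fact, not the paper’s high-dimensional construction; all names in it are ours:

```python
import numpy as np

def perturb_max_sample(theta, rng):
    """Draw one exact sample from the Gibbs distribution p(i) ∝ exp(theta[i])
    by maximizing randomly perturbed potentials (the Gumbel-max trick)."""
    gumbel = rng.gumbel(size=len(theta))  # i.i.d. Gumbel(0, 1) perturbations
    return int(np.argmax(np.asarray(theta) + gumbel))

rng = np.random.default_rng(0)
theta = np.array([1.0, 2.0, 0.5])                  # unnormalized log-potentials
samples = [perturb_max_sample(theta, rng) for _ in range(50_000)]
empirical = np.bincount(samples, minlength=3) / len(samples)
gibbs = np.exp(theta) / np.exp(theta).sum()        # target Gibbs distribution
print(np.round(empirical, 3), np.round(gibbs, 3))
```

Over a state space of size d^n, drawing the full perturbation vector costs exponentially many Gumbel variables, which is exactly the cost the paper’s low-dimensional perturbations avoid.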
- Published
- 2019
- Full Text
- View/download PDF
48. Secret-Key Generation in Many-to-One Networks: An Integrated Game-Theoretic and Information-Theoretic Approach.
- Author
- Chou, Remi A. and Yener, Aylin
- Subjects
- *RANDOM variables, *GENERATIONS, *PUBLIC communication, *GAME theory, *SELF-interest
- Abstract
This paper considers secret-key generation between several agents and a base station that observe independent and identically distributed realizations of correlated random variables. Each agent wishes to generate the longest possible individual key with the base station by means of public communication. All keys must be jointly kept secret from all external entities. In this many-to-one secret-key generation setting, it can be shown that the agents can take advantage of a collective protocol to increase the sum rate of their generated keys. However, when each agent is only interested in maximizing its own secret-key rate, agents may be unwilling to participate in a collective protocol. Furthermore, when such a collective protocol is employed, the question arises of how to fairly allocate individual key rates. This paper studies the tension between cooperation and self-interest with a game-theoretic treatment. It establishes that cooperation is in the best interest of all individualistic agents and that there exist individual secret-key rate allocations that incentivize the agents to follow the protocol. In addition, an explicit coding scheme that achieves such allocations is proposed. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
49. Message Transmission Over Classical Quantum Channels With a Jammer With Side Information: Message Transmission Capacity and Resources.
- Author
- Boche, Holger, Cai, Minglai, and Cai, Ning
- Subjects
- *QUANTUM information theory, *RADAR interference, *HILBERT space, *QUANTUM communication, *ENCODING, *RANDOM variables
- Abstract
In this paper, a new model for arbitrarily varying classical-quantum channels is proposed, in which a jammer has side information. The first part of the paper considers the communication scenario in which the jammer can select only classical inputs as a jamming sequence; this corresponds to the standard model of arbitrarily varying classical-quantum channels. Two scenarios are considered: in the first, the jammer knows the channel input, while in the second it knows both the channel input and the message. The transmitter and receiver share a secret random key with a vanishing key rate. The capacity under both the average and maximum error criteria is determined for both scenarios, and a strong converse is proved. All these capacities turn out to be equal, which means that additionally revealing the message to the jammer does not change the capacity. The second part of the paper considers the communication scenario with a fully quantum jammer and derives a single-letter characterization of the capacity with a secret random key as assistance, under both the average and maximum error criteria. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
50. MIMO Gaussian Broadcast Channels With Common, Private, and Confidential Messages.
- Author
- Goldfeld, Ziv and Permuter, Haim H.
- Subjects
- *MIMO systems, *GAUSSIAN channels, *CODING theory, *WIRELESS communications
- Abstract
The two-user multiple-input multiple-output Gaussian broadcast channel with common, private, and confidential messages is considered. The transmitter sends a common message to both users, a confidential message to User 1, and a private (non-confidential) message to User 2. The secrecy-capacity region is characterized by showing that certain inner and outer bounds coincide and that the boundary points are achieved by Gaussian inputs, which enables the development of a tight converse. The proof relies on the factorization of upper concave envelopes and a variant of dirty-paper coding (DPC). It is shown that the entire region is exhausted by using DPC to cancel out the signal of the non-confidential message at Receiver 1, making DPC against the signal of the confidential message unnecessary. A numerical example illustrates the secrecy-capacity results. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF