1,037 results
Search Results
102. A New Construction of Block Codes From Algebraic Curves.
- Author
Jin, Lingfei
- Subjects
BLOCK codes, ERROR-correcting codes, ALGEBRAIC curves, ALGEBRAIC varieties, CODING theory
- Abstract
Since the discovery of Goppa geometric codes, people have been asking whether there are different constructions of block codes from algebraic curves that give the same parameters as Goppa geometric codes. Despite great effort by researchers, no such construction has been found so far. Although the literature contains many constructions of block codes from algebraic curves, most of them are quite different in nature from Goppa's and thus have different parameters. Some of these constructions have the same parameters as Goppa geometric codes; however, it was proved that these are essentially the same codes defined by Goppa. In this paper, we solve this question for the case where the characteristic of the ground field is 2; namely, we present a different construction of block codes from algebraic curves that gives the same parameters as Goppa geometric codes in the characteristic 2 case. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
103. On the Penalty of Optimal Fix-Free Codes.
- Author
Zahabi, Sayed Jalal and Khosravifard, Mohammadali
- Subjects
OPTIMAL control theory, COMPUTATIONAL complexity, POCKET computers, VECTORS (Calculus), REDUNDANCY in engineering
- Abstract
In this paper, the difference between the redundancy of the optimal asymmetric/symmetric fix-free code, and that of the optimal prefix-free code is considered as the penalty of benefiting from the desired properties of fix-free codes. This penalty is studied from different perspectives. In particular, it is shown that the average penalty of asymmetric fix-free codes is less than 0.21 bit per symbol. Moreover, it is proved that when the source alphabet size is sufficiently large, for almost all sources, the penalty is less than or equal to 0.182 bit per symbol. Regarding symmetric fix-free codes, it is shown that the average penalty tends to infinity as the source alphabet size increases. [ABSTRACT FROM AUTHOR]
- Published
- 2015
104. On the Two-User Interference Channel With Lack of Knowledge of the Interference Codebook at One Receiver.
- Author
Dytso, Alex, Tuninetti, Daniela, and Devroye, Natasha
- Subjects
INTERFERENCE channels (Telecommunications), INFORMATION theory, RECEIVING antennas, DECODERS & decoding, SIGNAL-to-noise ratio, RANDOM noise theory
- Abstract
In multiuser information theory, it is often assumed that every node in the network possesses all codebooks used in the network. This assumption may be impractical in distributed ad hoc, cognitive, or heterogeneous networks. This paper considers the two-user interference channel with one oblivious receiver (IC-OR), i.e., one receiver lacks knowledge of the interfering codebook, whereas the other receiver knows both codebooks. This paper asks whether, and if so by how much, the channel capacity of the IC-OR is reduced compared with that of the classical IC where both receivers know all codebooks. A novel outer bound is derived and shown to be achievable to within a gap for the class of injective semideterministic IC-ORs; the gap is shown to be zero for injective fully deterministic IC-ORs. An exact capacity result is shown for the general memoryless IC-OR when the nonoblivious receiver experiences very strong interference. For the linear deterministic IC-OR that models the Gaussian noise channel at high SNR, non-i.i.d. Bernoulli(1/2) input bits are shown to achieve points not achievable by i.i.d. Bernoulli(1/2) input bits used in the same achievability scheme. For the real-valued Gaussian IC-OR, the gap is shown to be at most 1/2 bit per channel use, even though the set of optimal input distributions for the derived outer bound could not be determined. Toward understanding the Gaussian IC-OR, an achievability strategy is evaluated in which the input alphabets at the nonoblivious transmitter are a mixture of discrete and Gaussian random variables, where the cardinality of the discrete part is appropriately chosen as a function of the channel parameters. Surprisingly, even though the oblivious receiver intuitively should not be able to jointly decode the intended and interfering messages (whose codebook is unavailable), it is shown that with this choice of input, the capacity region of the symmetric Gaussian IC-OR is to within $\frac{1}{2}\log(12\pi \mathrm{e}) \approx 3.34$ bits (per channel use per user) of an outer bound for the classical Gaussian IC with full codebook knowledge at both receivers. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
105. Deception With Side Information in Biometric Authentication Systems.
- Author
Kang, Wei, Cao, Daming, and Liu, Nan
- Subjects
BIOMETRIC identification, PROBABILITY theory, DECODING algorithms, HUMAN fingerprints, NUCLEOTIDE sequencing, RATE distortion theory, EQUIPMENT & supplies
- Abstract
In this paper, we study the probability of successful deception of an uncompressed biometric authentication system with side information at the adversary. This models the scenario where the adversary may have correlated side information, e.g., a partial fingerprint or a DNA sequence of a relative of the legitimate user. We find the optimal exponent of the deception probability by proving both the achievability and the converse. Our proofs are based on a connection between the problem of deception with side information and the rate distortion problem with side information at both the encoder and the decoder. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
106. Duality Codes and the Integrality Gap Bound for Index Coding.
- Author
Yu, Hao and Neely, Michael J.
- Subjects
CODING theory, MATHEMATICAL bounds, DATA packeting, LINEAR programming, ALGORITHMS
- Abstract
This paper considers a base station that delivers packets to multiple receivers through a sequence of coded transmissions. All receivers overhear the same transmissions. Each receiver may already have some of the packets as side information, and requests another subset of the packets. This problem is known as the index coding problem and can be represented by a bipartite digraph. An integer linear program is developed that provides a lower bound on the minimum number of transmissions required for any coding algorithm. Conversely, its linear programming relaxation is shown to provide an upper bound that is achievable by a simple form of vector linear coding. Thus, the information theoretic optimum is bounded by the integrality gap between the integer program and its linear relaxation. In the special case when the digraph has a planar structure, the integrality gap is shown to be zero, so that exact optimality is achieved. Finally, for nonplanar problems, an enhanced integer program is constructed that provides a smaller integrality gap. The dual of this problem corresponds to a more sophisticated partial clique coding strategy that time-shares between maximum distance separable codes. This paper illuminates the relationship between index coding, duality, and integrality gaps between integer programs and their linear relaxations. [ABSTRACT FROM PUBLISHER]
- Published
- 2014
107. On the Minimum/Stopping Distance of Array Low-Density Parity-Check Codes.
- Author
Rosnes, Eirik, Ambroze, Marcel Adrian, and Tomlinson, Martin
- Subjects
LOW density parity check codes, HAMMING weight, PROBABILISTIC automata, INFORMATION storage & retrieval systems -- Code words, ERROR probability, INTEGERS
- Abstract
In this paper, we study the minimum/stopping distance of array low-density parity-check (LDPC) codes. An array LDPC code is a quasi-cyclic LDPC code specified by two integers \(q\) and \(m\), where \(q\) is an odd prime and \(m \leq q\). In the literature, the minimum/stopping distance of these codes (denoted by \(d(q,m)\) and \(h(q,m)\), respectively) has been thoroughly studied for \(m \leq 5\). Both exact results, for small values of \(q\) and \(m\), and general (i.e., independent of \(q\)) bounds have been established. For \(m=6\), the best known minimum distance upper bound, derived by Mittelholzer, is \(d(q,6) \leq 32\). In this paper, we derive an improved upper bound of \(d(q,6) \leq 20\) and a new upper bound \(d(q,7) \leq 24\) by using the concept of a template support matrix of a codeword/stopping set. The bounds are tight with high probability in the sense that we have not been able to find codewords of strictly lower weight for several values of \(q\) using a minimum distance probabilistic algorithm. Finally, we provide new specific minimum/stopping distance results for \(m \leq 7\) and low-to-moderate values of \(q \leq 79\). [ABSTRACT FROM AUTHOR]
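For context, the array LDPC codes studied here have an explicit parity-check matrix built from circulant permutation blocks. A minimal sketch of that standard construction (the function name is illustrative, not from the paper):

```python
import numpy as np

def array_ldpc_parity_matrix(q: int, m: int) -> np.ndarray:
    """Parity-check matrix of the (q, m) array LDPC code: an m x q grid of
    q x q circulant permutation blocks, block (i, j) = P^(i*j), where P is
    the single-step cyclic shift matrix (q an odd prime, m <= q)."""
    P = np.roll(np.eye(q, dtype=int), 1, axis=1)
    return np.block([[np.linalg.matrix_power(P, (i * j) % q) for j in range(q)]
                     for i in range(m)])

H = array_ldpc_parity_matrix(5, 3)
assert H.shape == (15, 25)          # m*q rows, q*q columns
assert (H.sum(axis=0) == 3).all()   # every column has weight m
assert (H.sum(axis=1) == 5).all()   # every row has weight q
```

The regular column weight m and row weight q follow directly from each block being a permutation matrix.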
- Published
- 2014
108. Network Coding Capacity Regions via Entropy Functions.
- Author
Chan, Terence H. and Grant, Alex
- Subjects
ROUTING (Computer network management), CHANNEL capacity (Telecommunications), CODING theory, ENTROPY (Information theory), LINEAR network coding, ERROR probability
- Abstract
In this paper, we use entropy functions to characterize the set of rate-capacity tuples achievable with either zero decoding error, or vanishing decoding error, for general network coding problems for acyclic networks. We show that when sources are colocated, the outer bound is tight and the sets of zero-error achievable and vanishing-error achievable rate-capacity tuples are the same. Then, we extend these results to networks subject to linear encoding constraints, routing constraints (where some or all nodes can only perform routing), and secrecy constraints. Finally, we show that even for apparently simple networks, the design of optimal codes may be difficult. In particular, we prove that for the incremental multicast problem and for the single-source secure network coding problem, characterization of the achievable set can be very hard and linear network codes may not be optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2014
109. Some Tight Lower Bounds on the Redundancy of Optimal Binary Prefix-Free and Fix-Free Codes.
- Author
Javad-Kalbasi, Mohammad and Khosravifard, Mohammadali
- Subjects
REDUNDANCY in engineering, CIPHERS, TWO-dimensional bar codes
- Abstract
The tight lower bound on the redundancy of optimal prefix-free codes in terms of a given symbol probability (not necessarily the largest or the smallest one) has been derived in the literature. The first goal of this paper is to derive the tight lower bounds in terms of $j$ ($j>1$) known symbol probabilities of the source, using some properties of the Kullback-Leibler distance. Since fix-free codes are special prefix-free codes (for which, in addition, no codeword is a suffix of another codeword), all lower bounds on the redundancy of optimal prefix-free codes are also valid for fix-free ones. Accordingly, the second question of the paper concerns the tightness of the derived lower bounds for optimal fix-free codes. It is proven that these lower bounds are tight for optimal fix-free codes if $j\leq 3$, and are not tight for $j\geq 4$. Also, it is shown that the tight lower bound in terms of the probability of the most likely symbol is the same for optimal prefix-free and optimal fix-free codes. [ABSTRACT FROM AUTHOR]
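The redundancy in question is the gap between the optimal expected codeword length and the source entropy. A small sketch of computing it with Huffman's algorithm, which is optimal among prefix-free codes (illustrative code, not from the paper):

```python
import heapq
from math import log2

def huffman_redundancy(p):
    """Redundancy (expected length minus entropy, in bits per symbol) of an
    optimal prefix-free code for distribution p, via Huffman's algorithm:
    a symbol's depth equals the number of merges its weight takes part in."""
    heap = [(w, [i]) for i, w in enumerate(p)]
    heapq.heapify(heap)
    depth = [0] * len(p)
    while len(heap) > 1:
        w1, s1 = heapq.heappop(heap)
        w2, s2 = heapq.heappop(heap)
        for i in s1 + s2:
            depth[i] += 1
        heapq.heappush(heap, (w1 + w2, s1 + s2))
    avg_len = sum(w * d for w, d in zip(p, depth))
    entropy = -sum(w * log2(w) for w in p if w > 0)
    return avg_len - entropy

# A dyadic source is coded with zero redundancy.
assert abs(huffman_redundancy([0.5, 0.25, 0.25])) < 1e-9
```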
- Published
- 2020
110. K-Plex 2-Erasure Codes and Blackburn Partial Latin Squares.
- Author
Stones, Rebecca J.
- Subjects
MAGIC squares, CIPHERS
- Abstract
A $k$-plex of order $n$ is an $n \times n$ matrix on $n$ symbols, where every row contains $k$ distinct symbols, every column contains $k$ distinct symbols, and every symbol occurs exactly $k$ times. Yi et al. (2019) introduced 3-plex codes, which are 2-erasure codes (2-erasure tolerant array codes) derived from 3-plexes. In this paper, we generalize 3-plex codes to $k$-plex codes. We introduce the notion of a “strong” $k$-plex, which implies the derived $k$-plex code is 2-erasure tolerant. Moreover, $k$-plex codes derived from strong $k$-plexes have a straightforward algorithm for reconstruction. These general $k$-plex codes offer greater flexibility when choosing a suitable code for a storage system, enabling the operator to better optimize the unavoidable trade-offs involved. Blackburn asked for the maximum number of entries in an $n \times n$ partial Latin square on $n$ symbols in which if distinct cells $(i,j)$ and $(i',j')$ contain the same symbol, then the cells $(i',j)$ and $(i,j')$ are empty. A “strong” $k$-plex satisfies the Blackburn property (along with two other properties related to erasure coding). We investigate the necessary conditions for the existence of Blackburn $k$-plexes (and hence necessary conditions for the existence of strong $k$-plexes). We show that any Blackburn $k$-plex has order $n \geq \lceil (\sqrt{2}+1)k-2 \rceil$. We describe how to construct strong $k$-plexes of order $n$ when $k \in \{2,3,4,5\}$ for all possible orders $n$, and we give a simple construction of strong $k$-plexes of order $k^{2}$ for $k \geq 2$. [ABSTRACT FROM AUTHOR]
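The defining conditions of a k-plex can be checked mechanically. A sketch, assuming cells are either empty (None) or hold one symbol; the union-of-diagonals example below is a standard trick that yields a 2-plex for odd order, not the paper's construction:

```python
def is_k_plex(M, k, n):
    """Check the k-plex conditions from the abstract: every row and every
    column of the n x n array holds k distinct symbols (None = empty cell),
    and each of the n symbols occurs exactly k times in the whole array."""
    def line_ok(line):
        filled = [x for x in line if x is not None]
        return len(filled) == k and len(set(filled)) == k
    rows_ok = all(line_ok(row) for row in M)
    cols_ok = all(line_ok([M[i][j] for i in range(n)]) for j in range(n))
    counts = {}
    for row in M:
        for x in row:
            if x is not None:
                counts[x] = counts.get(x, 0) + 1
    return rows_ok and cols_ok and all(counts.get(s) == k for s in range(n))

# A 2-plex of order 5: keep two cyclic diagonals of the Latin square
# L[i][j] = (i + j) mod 5 (this works because the order is odd).
n, k = 5, 2
M = [[(i + j) % n if (j - i) % n < k else None for j in range(n)]
     for i in range(n)]
assert is_k_plex(M, k, n)
```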
- Published
- 2020
111. Cycle Structures of a Class of Cascaded FSRs.
- Author
Chang, Zuling, Gong, Guang, and Wang, Qiang
- Subjects
SHIFT registers, LINEAR equations, PSYCHOLOGICAL feedback, LINEAR systems, FINITE fields, BOOLEAN functions
- Abstract
In this paper, we study a class of binary nonlinear feedback shift register sequences generated by two cascaded feedback shift registers, where one is an LFSR and the other generates a de Bruijn sequence. The cycle structure (in particular, the initial state of each cycle) is determined by solving a system of linear equations. As an application, we can algorithmically generate de Bruijn sequences of large period. [ABSTRACT FROM AUTHOR]
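The paper obtains de Bruijn sequences from cascaded FSRs; as a point of comparison, the classic FKM (Lyndon-word concatenation) algorithm generates one directly. A textbook sketch, not the paper's method:

```python
def de_bruijn(k, n):
    """De Bruijn sequence over an alphabet of size k, order n, via the
    classic FKM algorithm: concatenate, in lexicographic order, the
    Lyndon words whose lengths divide n."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

s = de_bruijn(2, 4)
assert len(s) == 16  # every binary 4-tuple appears exactly once cyclically
```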
- Published
- 2020
112. Quaternary Hermitian Linear Complementary Dual Codes.
- Author
Araya, Makoto, Harada, Masaaki, and Saito, Ken
- Subjects
BINARY codes, LIQUID crystal displays, CIPHERS
- Abstract
The largest minimum weights among quaternary Hermitian linear complementary dual codes are known for dimension 2. In this paper, we give some conditions on the nonexistence of quaternary Hermitian linear complementary dual codes with large minimum weights. As a consequence, we completely determine the largest minimum weights for dimension 3, by using a classification of some quaternary codes. In addition, for a positive integer $s$, an entanglement-assisted quantum error-correcting $[[21s+5,3,16s+3;21s+2]]$ code with maximal entanglement is constructed for the first time from a quaternary Hermitian linear complementary dual $[26, 3, 19]$ code. [ABSTRACT FROM AUTHOR]
- Published
- 2020
113. Tight Information Theoretic Converse Results for Some Pliable Index Coding Problems.
- Author
Liu, Tang and Tuninetti, Daniela
- Subjects
LINEAR codes, DESIGN techniques, LINEAR network coding
- Abstract
This paper studies the Pliable Index CODing problem (PICOD), which models content-type distribution networks. In the PICOD$(t)$ problem there are $m$ messages, $n$ users, and each user has a distinct message side information set, as in the classical Index Coding problem (IC). Differently from the IC, where each user has a pre-specified set of messages to decode, in the PICOD$(t)$ a user is “pliable” and is satisfied if it can decode any $t$ messages that are not in its side information set. The goal is to find a code with the shortest length that satisfies all the users. This flexibility in determining the desired message sets makes the PICOD$(t)$ behave quite differently compared to the IC, and its analysis even more challenging. This paper mainly focuses on the complete-$S$ PICOD$(t)$ with $m$ messages, where the set $S\subset [m]$ contains the sizes of the side information sets, and the number of users is $n=\sum_{s\in S}\binom{m}{s}$, with no two users having the same side information set. Capacity results are shown for: (i) the consecutive complete-$S$ PICOD$(t)$, where $S=[s_{\min}:s_{\max}]$ for some $0 \leqslant s_{\min}\leqslant s_{\max} \leqslant m-t$, and (ii) the complement-consecutive complete-$S$ PICOD$(t)$, where $S=[0:m-t]\backslash [s_{\min}:s_{\max}]$, for some $0 < s_{\min}\leqslant s_{\max} < m-t$. The novel converse proof is inspired by combinatorial design techniques and the key insight is to consider all messages that a user can eventually decode successfully, even those in excess of the $t$ required ones. This allows one to circumvent the need to consider all possible desired message set assignments at the users in order to find the one that leads to the shortest code length. The core of the novel proof is to solve the critical complete-$S$ PICOD$(t)$ with $m = 2s+t$ messages and $S=\{s\}$, by showing the existence of a user who can decode $s+t$ messages regardless of the desired message set assignment. All other tight converse results for the complete-$S$ PICOD$(t)$ can be deduced from this critical case. The converse results show the information theoretic optimality of simple linear coding schemes. By similar reasoning, all complete-$S$ PICOD$(t)$ where the number of messages is $m\leqslant 5$ can be fully characterized. In addition, tight converse results are also shown for the PICOD(1) with circular-arc network topology hypergraph. [ABSTRACT FROM AUTHOR]
- Published
- 2020
114. Scalar MSCR Codes via the Product Matrix Construction.
- Author
Zhang, Yaqian and Zhang, Zhifang
- Subjects
MATRIX multiplications, PRODUCT coding, BUILDING design & construction, CONSTRUCTION, SYMMETRIC matrices
- Abstract
An $(n,k,d)$ cooperative regenerating code provides the optimal-bandwidth repair for any $t~(t>1)$ node failures in a cooperative way. In particular, an MSCR (minimum storage cooperative regenerating) code retains the same storage overhead as an $(n,k)$ MDS code. Suppose each node stores $\alpha$ symbols, which indicates the sub-packetization level of the code. A scalar MSCR code attains the minimum sub-packetization, i.e., $\alpha=d-k+t$. Until now, all existing constructions of scalar MSCR codes have been restricted to very special parameters, e.g., $d=k$ or $k=2$. In a recent work, Ye and Barg constructed MSCR codes for all $n,k,d,t$; however, their construction needs $\alpha \approx \exp(n^{t})$, which is almost infeasible in practice. In this paper, we give an explicit construction of scalar MSCR codes for all $d\geq \max\{2k-1-t,k\}$, which covers all possible parameters except the case of $k\leq d\leq 2k-2-t$ when $k< 2k-1-t$. Moreover, as a complementary result, for $k< d< 2k-2-t$ we prove the nonexistence of linear scalar MSCR codes that have invariant repair spaces. Our construction and most of the previous scalar MSCR codes all have invariant repair spaces, and this property is appealing in practice because of convenient repair. In this sense, this work presents an almost full description of usual scalar MSCR codes. [ABSTRACT FROM AUTHOR]
- Published
- 2020
115. Some Nonprimitive BCH Codes and Related Quantum Codes.
- Author
Liu, Yang, Li, Ruihu, Guo, Guanmin, and Wang, Junli
- Subjects
CIPHERS, LINEAR codes, TELECOMMUNICATION systems, ERROR correction (Information theory)
- Abstract
Let $q$ be a prime power and $m\geq 3$ be odd. Suppose that $n=\frac{q^{2m}-1}{a}$ with $a|(q^{m}+1)$ and $3\leq a \leq 2(q^{2}-q+1)$. This paper mainly determines the actual maximum designed distance of Hermitian dual-containing Bose-Chaudhuri-Hocquenghem (BCH) codes over $\mathbb{F}_{q^{2}}$ of length $n$. Firstly, we give the maximum designed distance $\delta_{m,a}^{R}$ of narrow-sense Hermitian dual-containing BCH codes. Secondly, we show that there are also non-narrow-sense ones of designed distance up to $\delta_{m,a}^{R}$. It is worth mentioning that our maximum designed distance satisfies $\delta_{m,a}^{R}>\lceil \frac{a}{2}\rceil \delta_{m}^{A}$, where $\delta_{m}^{A}$ is given by Aly et al. (IEEE Trans. Inf. Theory, vol. 53, no. 3, pp. 1183-1188, 2007). Thus, many families of Hermitian dual-containing BCH codes with relatively large designed distance are obtained. Applying the Hermitian construction to them, we subsequently construct different classes of nonprimitive quantum codes, which are new in the sense that their parameters are not covered in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2019
116. Tone Reservation for OFDM With Restricted Carrier Set.
- Author
Boche, Holger and Monich, Ullrich J.
- Subjects
ORTHOGONAL frequency division multiplexing, FREQUENCY division multiple access
- Abstract
The tone reservation method can be used to reduce the peak-to-average power ratio (PAPR) in orthogonal frequency division multiplexing (OFDM) transmission systems. In this paper, the tone reservation method is analyzed for OFDM with a restricted carrier set, where only the positive carrier frequencies are used. Performing a fully analytical analysis, we give a complete characterization of the information sets for which the PAPR problem is solvable. To derive our main result, we connect the PAPR problem with a geometric functional analytic property of certain spaces. Furthermore, we present applications of our theory that give guidelines for choosing the information carriers in the finite setting, and we discuss a probabilistic approach for selecting the carriers. In addition, it is shown that if there exists an information sequence for which the PAPR problem is not solvable, then the set of information sequences for which the PAPR problem is not solvable is a residual set. [ABSTRACT FROM AUTHOR]
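For a concrete sense of the quantity being reduced, the PAPR of a single OFDM symbol can be computed from an oversampled IFFT. A sketch with illustrative naming (not code from the paper):

```python
import numpy as np

def papr_db(symbols, oversample=4):
    """Peak-to-average power ratio (in dB) of one OFDM symbol: IFFT the
    carrier values (zero-padded for oversampling), then compare the peak
    instantaneous power of the time-domain signal with its mean power."""
    n = len(symbols)
    spectrum = np.concatenate([symbols, np.zeros((oversample - 1) * n)])
    x = np.fft.ifft(spectrum)
    power = np.abs(x) ** 2
    return 10 * np.log10(power.max() / power.mean())

# All carriers equal is the worst case: PAPR = 10*log10(n) dB.
assert abs(papr_db([1, 1, 1, 1]) - 10 * np.log10(4)) < 1e-6
# A single active carrier gives a constant-envelope signal: 0 dB.
assert abs(papr_db([1, 0, 0, 0])) < 1e-9
```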
- Published
- 2019
117. Near Optimal Coded Data Shuffling for Distributed Learning.
- Author
Adel Attia, Mohamed and Tandon, Ravi
- Subjects
MACHINE learning, CODING theory, DISTRIBUTED databases, DATA transmission systems
- Abstract
Data shuffling between nodes of a distributed cluster is one of the critical steps in implementing large-scale learning algorithms. Randomly shuffling the data-set among a cluster of workers allows different nodes to obtain fresh data assignments at each learning epoch. This process has been shown to provide improvements in the learning process (via testing and training error). However, the statistical benefits of distributed data shuffling come at the cost of extra communication overhead from the master node to worker nodes, and can act as one of the major bottlenecks in the overall time for computation. There has been significant recent interest in devising approaches to minimize this communication overhead. One approach is to provision for extra storage at the computing nodes. The other emerging approach is to leverage coded communication to minimize the overall communication overhead. The focus of this work is to understand the fundamental tradeoff between the amount of storage and the communication overhead for distributed data shuffling. In this paper, we first present an information theoretic formulation for the data shuffling problem, accounting for the underlying problem parameters (number of workers $K$, number of data points $N$, and available storage $S$ per node). We then present an information theoretic lower bound on the communication overhead for data shuffling as a function of these parameters. We next present a novel coded communication scheme and show that the resulting communication overhead of the proposed scheme is within a multiplicative factor of at most $\frac{K}{K-1}$ from the lower bound (which is upper bounded by 2 for $K\geq 2$). Furthermore, we present new results towards closing this gap through a novel coded communication scheme, which we call aligned coded shuffling. This scheme is inspired by the ideas of coded shuffling and interference alignment. In particular, we show that the aligned scheme achieves the optimal storage vs. communication trade-off for $K < 5$, and further reduces the maximum multiplicative gap down to $\frac{K-\frac{1}{3}}{K-1}$ for $K\geq 5$. [ABSTRACT FROM AUTHOR]
- Published
- 2019
118. On the Complete Weight Distribution of Subfield Subcodes of Algebraic-Geometric Codes.
- Author
Chan, Chin Hei and Xiong, Maosheng
- Subjects
LINEAR codes, CIPHERS, HAMMING distance, DEVIATION (Statistics), DIVISOR theory, ERROR correction (Information theory)
- Abstract
In this paper, we first study deviations of the complete weight distribution of a linear code from that of a random code. Then, we consider a large family of subfield subcodes of algebraic-geometric codes over prime fields which include BCH codes and Goppa codes and prove that the complete weight distribution is close to that of a random code if the code length is large compared with the genus of the curve and the degree of the divisor defining the code. [ABSTRACT FROM AUTHOR]
- Published
- 2019
119. High-Meets-Low: Construction of Strictly Almost Optimal Resilient Boolean Functions via Fragmentary Walsh Spectra.
- Author
Zhang, WeiGuo
- Subjects
BOOLEAN functions, ODD numbers, STREAM ciphers, CONSTRUCTION
- Abstract
This paper considers the construction of resilient Boolean functions on an odd number of variables with strictly almost optimal (SAO) nonlinearity. By introducing the fragmentary Walsh transform, a construction technique called “High-Meets-Low” is proposed. The detailed design procedures of a 39-variable 3-resilient Boolean function with SAO nonlinearity $2^{38}-2^{19}+2^{16}+2^{14}$ are given. It is shown that the nonlinearity of an $n$-variable $t$-resilient Boolean function can reach $2^{n-1}-2^{(n-1)/2}+5\cdot 2^{(n-11)/2}$ or $2^{n-1}-2^{(n-1)/2}+2^{(n-7)/2}$, which are the largest known values for the corresponding $n$ and $t$ values. Finally, by constructing a 29-variable balanced Boolean function with SAO nonlinearity $2^{28}-2^{14}+2^{10}+2^{9}$, we show an alternative method to realize the High-Meets-Low construction technique. [ABSTRACT FROM AUTHOR]
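Nonlinearity is defined through the Walsh spectrum as $NL(f)=2^{n-1}-\frac{1}{2}\max_a|W_f(a)|$. A sketch of the standard fast Walsh-Hadamard transform used to evaluate it (textbook machinery, not the paper's fragmentary-transform technique):

```python
import numpy as np

def walsh_spectrum(tt):
    """Unnormalized Walsh-Hadamard transform of a Boolean function given
    as a 0/1 truth table of length 2^n (in-place butterfly, O(n*2^n))."""
    w = 1 - 2 * np.asarray(tt, dtype=np.int64)  # (-1)^f(x)
    h = 1
    while h < len(w):
        for i in range(0, len(w), 2 * h):
            a = w[i:i + h].copy()
            b = w[i + h:i + 2 * h].copy()
            w[i:i + h] = a + b
            w[i + h:i + 2 * h] = a - b
        h *= 2
    return w

def nonlinearity(tt):
    """NL(f) = 2^(n-1) - max_a |W_f(a)| / 2."""
    n = int(np.log2(len(tt)))
    return 2 ** (n - 1) - int(np.abs(walsh_spectrum(tt)).max()) // 2

# The bent function f(x) = x1*x2 XOR x3*x4 on 4 variables attains the
# maximum possible nonlinearity 2^3 - 2^1 = 6.
f = [((x >> 3) & 1) * ((x >> 2) & 1) ^ ((x >> 1) & 1) * (x & 1)
     for x in range(16)]
assert nonlinearity(f) == 6
```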
- Published
- 2019
120. Weierstrass Pure Gaps on Curves With Three Distinguished Points.
- Author
Filho, Herivelto Martins Borges and Cunha, Gregory Duran
- Subjects
ALGEBRAIC codes, ALGEBRAIC geometry, ALGEBRAIC curves, PLANE curves, DIVISOR theory
- Abstract
Let $\mathbb{K}$ be an algebraically closed field. In this paper, we consider the class of smooth plane curves of degree $n+1>3$ over $\mathbb{K}$, containing three points, $P_{1}$, $P_{2}$, and $P_{3}$, such that $nP_{1}+P_{2}$, $nP_{2}+P_{3}$, and $nP_{3}+P_{1}$ are divisors cut out by three distinct lines. For such curves, we determine the dimension of certain special divisors supported on $\{P_{1},P_{2},P_{3}\}$, as well as an explicit description of all pure gaps at each nonempty subset of the distinguished points $P_{1}$, $P_{2}$, and $P_{3}$. When $\mathbb{K}=\overline{\mathbb{F}}_{q}$, this class of curves, which includes the Hermitian curve, is used to construct algebraic geometry codes having minimum distance better than the Goppa bound. [ABSTRACT FROM AUTHOR]
- Published
- 2022
121. Variations on a Theme by Massey.
- Subjects
RENYI'S entropy, UNPUBLISHED materials, DISTRIBUTION (Probability theory), ENTROPY, DIFFERENTIAL entropy, RANDOM variables
- Abstract
In 1994, Jim Massey proposed the guessing entropy as a measure of the difficulty that an attacker has to guess a secret used in a cryptographic system, and established a well-known inequality between entropy and guessing entropy. Over 15 years before, in an unpublished work, he also established a well-known inequality for the entropy of an integer-valued random variable of given variance. In this paper, we establish a link between the two works by Massey in the more general framework of the relationship between discrete (absolute) entropy and continuous (differential) entropy. Two approaches are given in which the discrete entropy (or Rényi entropy) of an integer-valued variable can be upper bounded using the differential (Rényi) entropy of some suitably chosen continuous random variable. As an application, lower bounds on guessing entropy and guessing moments are derived in terms of entropy or Rényi entropy (without side information) and conditional entropy or Arimoto conditional entropy (when side information is available). [ABSTRACT FROM AUTHOR]
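To make the central quantity concrete: the guessing entropy is the expected number of guesses under the optimal (probability-ordered) strategy, and Massey's 1994 inequality lower-bounds it by $2^{H(X)-2}+1$ when $H(X)\geq 2$ bits. A small illustration (not the paper's new bounds):

```python
from math import log2

def entropy(p):
    """Shannon entropy in bits."""
    return -sum(w * log2(w) for w in p if w > 0)

def guessing_entropy(p):
    """Expected number of guesses to find X when guessing values in
    decreasing order of probability (the optimal strategy)."""
    q = sorted(p, reverse=True)
    return sum((i + 1) * w for i, w in enumerate(q))

# Uniform over 8 values: H = 3 bits, G = (1 + ... + 8)/8 = 4.5, and
# Massey's bound G >= 2**(H-2) + 1 = 3 indeed holds.
p = [1 / 8] * 8
assert abs(guessing_entropy(p) - 4.5) < 1e-9
assert guessing_entropy(p) >= 2 ** (entropy(p) - 2) + 1
```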
- Published
- 2022
122. Clustering-Correcting Codes.
- Author
Shinkar, Tal, Yaakobi, Eitan, Lenz, Andreas, and Wachter-Zeh, Antonia
- Subjects
SEQUENTIAL analysis, COPY editing
- Abstract
A new family of codes, called clustering-correcting codes, is presented in this paper. This family of codes is motivated by the special structure of the data that is stored in DNA-based storage systems. The data stored in these systems has the form of unordered sequences, also called strands, and every strand is synthesized thousands to millions of times, where some of these copies are read back during sequencing. Due to the unordered structure of the strands, an important task in the decoding process is to place them in their correct order. This is usually accomplished by allocating part of the strand for an index. However, in the presence of errors in the index field, important information on the order of the strands may be lost. Clustering-correcting codes ensure that if the distance between the index fields of two strands is small, their data fields have large distance. It is shown how this property enables placing the strands together in their correct clusters even in the presence of errors. We present lower and upper bounds on the size of clustering-correcting codes and an explicit construction of these codes which uses only a single symbol of redundancy. The results are first presented for the Hamming metric and are then extended to the edit distance. [ABSTRACT FROM AUTHOR]
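The defining property of a clustering-correcting code can be stated as a pairwise check on stored strands. A sketch, where the index-prefix layout and the thresholds t_index and t_data are illustrative assumptions rather than the paper's exact parameters:

```python
def hamming(u, v):
    """Hamming distance between equal-length strings."""
    return sum(a != b for a, b in zip(u, v))

def is_clustering_correcting(strands, index_len, t_index, t_data):
    """Check the separation property described in the abstract: whenever
    two strands have index fields at Hamming distance <= t_index, their
    data fields are at distance >= t_data.  (t_index, t_data, and the
    index-prefix layout are hypothetical illustration parameters.)"""
    for i in range(len(strands)):
        for j in range(i + 1, len(strands)):
            u, v = strands[i], strands[j]
            if (hamming(u[:index_len], v[:index_len]) <= t_index
                    and hamming(u[index_len:], v[index_len:]) < t_data):
                return False
    return True

# Two-bit indices followed by three-bit data fields.
assert is_clustering_correcting(['00111', '01000', '10001'], 2, 1, 2)
assert not is_clustering_correcting(['00111', '01000', '10110'], 2, 1, 2)
```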
- Published
- 2022
123. Codes for Information Retrieval With Small Uncertainty.
- Author
Junnila, Ville and Laihonen, Tero
- Subjects
INFORMATION retrieval, CODING theory, UNCERTAINTY (Information theory), COMPUTER storage devices, INFORMATION theory, GRID computing
- Abstract
In a recent paper by Yaakobi and Bruck, the problem of information retrieval in associative memories has been considered. In an associative memory, each memory entry is associated to the neighboring entries. When searching for information, a fixed number of input clues are given, and the output set is formed by the entries associated to all the input clues. The maximum size of an output set is called the uncertainty of the associative memory. In this paper, we study the problem of information retrieval in associative memories with small uncertainty. In particular, we concentrate on the cases where the memory entries and their associations form a binary Hamming space or an infinite square grid. We focus on minimizing the number of input clues needed to retrieve information with small uncertainty and present good constructions, some of which are optimal, i.e., use the smallest possible number of clues. [ABSTRACT FROM AUTHOR]
- Published
- 2014
124. Construction of MDS Codes With Complementary Duals.
- Author
Jin, Lingfei
- Subjects
LINEAR codes, LIQUID crystal displays, REED-Solomon codes, DIGITAL signal processing -- Mathematics, COMPUTER science
- Abstract
A linear complementary dual (LCD) code is a linear code whose intersection with its dual is trivial. LCD codes have been extensively studied in the literature. On the other hand, maximum distance separable (MDS) codes are an important class of linear codes that have found wide applications in both theory and practice. However, little is known about MDS codes with complementary duals. The main purpose of this paper is to construct several classes of MDS codes with complementary duals, i.e., LCD MDS codes, through generalized Reed-Solomon codes. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
125. Meta-Fibonacci Codes: Efficient Universal Coding of Natural Numbers.
- Author
-
Avila, Bruno T. and Campello De Souza, Ricardo M.
- Subjects
NATURAL numbers ,FIBONACCI sequence ,NUMBER systems ,MATHEMATICAL bounds ,DATA compression - Abstract
In this paper, we address the problem of the universal coding of natural numbers. A new numeration system is introduced, which is based on variable-$r$ meta-Fibonacci sequences and generalizes the Zeckendorf numeration system. This new numeration system is used to construct binary, prefix-free, uniquely decodable universal codes called meta-Fibonacci codes. The main advantage of these codes is that they are parametrized by a sequence of numbers, the sequence o. By controlling the growth of the values of this sequence, we can control the length of the code words. This means that we can provide a general framework for building efficient universal coders for natural numbers. This framework is applied to the upper bounds on code word length defined by Leung-Yan-Cheong and Cover (1978), Levenshtein (1968), and Ahlswede (1997); no previously known code meets these bounds. In each case, we build meta-Fibonacci codes and demonstrate that the upper bound on their code word length is satisfied up to an additive constant, thereby solving these open problems. The framework may be applied to other upper bounds that satisfy the Kraft inequality. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
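The Zeckendorf numeration system that these codes generalize is easy to exhibit concretely. Below is a minimal sketch of the classic Fibonacci code built on Zeckendorf representations (an illustration of the base case only, not the paper's variable-$r$ construction): because no Zeckendorf representation uses two consecutive Fibonacci numbers, appending a final 1 makes every codeword end in "11", which yields a prefix-free, uniquely decodable code.

```python
def fib_encode(n: int) -> str:
    """Fibonacci (Zeckendorf) code of a positive integer n."""
    assert n >= 1
    # Fibonacci numbers F(2)=1, F(3)=2, ...; keep those <= n.
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()  # now fibs[-1] is the largest Fibonacci number <= n
    # Greedy Zeckendorf expansion: bit i marks whether fibs[i] is used.
    bits = ["0"] * len(fibs)
    r = n
    for i in range(len(fibs) - 1, -1, -1):
        if fibs[i] <= r:
            bits[i] = "1"
            r -= fibs[i]
    return "".join(bits) + "1"  # trailing '1' creates the "11" terminator

def fib_decode(code: str) -> int:
    """Inverse map: sum the Fibonacci numbers marked by the bits."""
    assert code.endswith("11")
    fibs = [1, 2]
    while len(fibs) < len(code):
        fibs.append(fibs[-1] + fibs[-2])
    return sum(f for bit, f in zip(code[:-1], fibs) if bit == "1")
```

For example, 4 = 3 + 1 encodes to "1011"; short numbers get short codewords, which is the sense in which such codes are universal.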
126. List Decoding of Crisscross Errors.
- Author
-
Wachter-Zeh, Antonia
- Subjects
DECODERS & decoding ,INFORMATION filtering ,ERROR correction (Information theory) ,ERROR-correcting codes ,FINITE fields - Abstract
In this paper, list decoding of crisscross errors in arrays over finite fields is considered. For this purpose, the so-called cover metric is used, where the cover of a matrix is a set of rows and columns which contains all non-zero elements of the matrix. A Johnson-like upper bound on the maximum list size in the cover metric is derived, showing that the list of codewords has polynomial size up to a certain radius. Furthermore, a simple list decoding algorithm for a known optimal code construction is presented, which decodes errors in the cover metric up to our upper bound. These results reveal significant differences between the cover metric and the rank metric and show that the cover metric is more suitable for correcting crisscross errors. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
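The cover weight underlying the metric is easy to compute directly for small arrays. A minimal brute-force sketch (illustration only; by König's theorem the same value equals a maximum matching on the bipartite graph of non-zero positions):

```python
from itertools import combinations

def cover_weight(M):
    """Minimum size of a cover of M: rows plus columns that together
    contain every non-zero entry.  For each candidate row set, the
    columns of the remaining non-zeros are forced, so a brute-force
    scan over row subsets suffices for small matrices."""
    m = len(M)
    nz = [(i, j) for i, row in enumerate(M) for j, v in enumerate(row) if v]
    return min(
        len(rows) + len({j for i, j in nz if i not in rows})
        for k in range(m + 1)
        for rows in map(set, combinations(range(m), k))
    )
```

A crisscross error confined to one row and one column has cover weight at most 2, which is why this metric matches that error model.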
127. Quantum Codes Derived From Certain Classes of Polynomials.
- Author
-
Zhang, Tao and Ge, Gennian
- Subjects
CODING theory ,QUANTUM computing ,POLYNOMIALS ,SET theory ,ERROR correction (Information theory) - Abstract
One central theme in quantum error-correction is to construct quantum codes that have a relatively large minimum distance. In this paper, we first present a construction of classical linear codes based on certain classes of polynomials. Through these classical linear codes, we are able to obtain some new quantum codes. It turns out that some of the quantum codes exhibited here have better parameters than the ones available in the literature. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
128. Finite-Length Analysis of Caching-Aided Coded Multicasting.
- Author
-
Shanmugam, Karthikeyan, Ji, Mingyue, Tulino, Antonia M., Llorca, Jaime, and Dimakis, Alexandros G.
- Subjects
MULTICASTING (Computer networks) ,MULTICASTING protocols ,FINITE difference method ,FINITE differences ,GEOMETRICAL drawing - Abstract
We study a noiseless broadcast link serving K users whose requests arise from a library of N files. Every user is equipped with a cache of size M files. It has been shown that by splitting all the files into packets and placing individual packets in a random independent manner across all the caches prior to any transmission, at most N/M file transmissions are required for any set of demands from the library. The achievable delivery scheme involves linearly combining packets of different files following a greedy clique cover solution to the underlying index coding problem. This remarkable multiplicative gain of random placement and coded delivery has been established in the asymptotic regime when the number of packets per file F scales to infinity. The asymptotic coding gain obtained is roughly t = KM/N. In this paper, we initiate the finite-length analysis of random caching schemes when the number of packets F is a function of the system parameters. Furthermore, for any clique cover-based coded delivery and a large class of random placement schemes that include the existing ones, we show that the number of packets required to get a multiplicative gain of (4/3)g is at least O((g/K)(N/M)^{g-1}). We design a new random placement and an efficient clique cover-based delivery scheme that achieves this lower bound approximately. We also provide tight concentration results that show that the average (over the random placement involved) number of transmissions concentrates very well, requiring only a polynomial number of packets in the rest of the system parameters. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
129. New Families of Optimal Frequency Hopping Sequence Sets.
- Author
-
Bao, Jingjun and Ji, Lijun
- Subjects
INTERFERENCE (Telecommunication) ,FREQUENCY changers ,COMBINATORICS ,MATRICES (Mathematics) - Abstract
Frequency hopping sequences (FHSs) are employed to mitigate the interferences caused by the hits of frequencies in frequency hopping spread spectrum systems. In this paper, we present combinatorial constructions for FHS sets, including direct constructions by using cyclotomic classes, recursive constructions based on cyclic difference matrices, merging blocks, and discrete logarithm. By these constructions, a number of series of new FHS sets are then produced. These FHS sets are optimal with respect to the Peng–Fan bounds. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
130. The Dual Codes of Several Classes of BCH Codes.
- Author
-
Gong, Binkai, Ding, Cunsheng, and Li, Chengju
- Subjects
LINEAR codes ,HAMMING distance ,TELECOMMUNICATION systems ,CYCLIC codes - Abstract
As a special subclass of cyclic codes, BCH codes have wide applications in communication and storage systems. A BCH code of length $n$ over $\mathbb{F}_{q}$ is always relative to an $n$-th primitive root of unity $\beta$ in an extension field of $\mathbb{F}_{q}$, and is called a dually-BCH code if its dual is also a BCH code relative to the same $\beta$. The question as to whether a BCH code is a dually-BCH code is in general very hard to answer. In this paper, an answer to this question for primitive narrow-sense BCH codes and projective narrow-sense ternary BCH codes is given. Sufficient and necessary conditions in terms of the designed distance $\delta$ are presented to ensure that these BCH codes are dually-BCH codes. In addition, the parameters of the primitive narrow-sense BCH codes and their dual codes are investigated. Some lower bounds on the minimum distances of the dual codes of primitive and projective narrow-sense BCH codes are developed. Especially for binary primitive narrow-sense BCH codes, the new bounds on the minimum distances of the dual codes improve the classical Sidel’nikov bound, and are also better than the Carlitz and Uchiyama bound for large designed distances $\delta$. The question as to what subclasses of cyclic codes are BCH codes is also answered to some extent. As a byproduct, the parameters of some subclasses of cyclic codes are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
131. Non-Binary Diameter Perfect Constant-Weight Codes.
- Subjects
HAMMING codes ,DIAMETER ,FAMILY size ,BINARY codes ,STEINER systems ,HAMMING distance ,ORTHOGONAL arrays - Abstract
Diameter perfect codes form a natural generalization of perfect codes. They are based on the code-anticode bound, which generalizes the sphere-packing bound. The code-anticode bound was proved by Delsarte for distance-regular graphs, and it holds for some other metrics too. In this paper we prove the bound for non-binary constant-weight codes with the Hamming metric and characterize the diameter perfect codes and the maximum-size anticodes for these codes. We distinguish between six families of non-binary diameter perfect constant-weight codes and four families of maximum-size non-binary constant-weight anticodes. Each one of these families of diameter perfect codes raises some different questions. We consider some of these questions and leave a lot of ground for further research. Finally, as a consequence, some $t$-intersecting families related to the well-known Erdős–Ko–Rado theorem are constructed. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
132. On the Global Optimality of Whittle’s Index Policy for Minimizing the Age of Information.
- Author
-
Kriouile, Saad, Assaad, Mohamad, and Maatouk, Ali
- Subjects
INFORMATION society ,MARKOV processes - Abstract
This paper examines the average age minimization problem where only a fraction of the network users can transmit simultaneously over unreliable channels. Finding the optimal scheduling scheme in this case is known to be challenging. Accordingly, the Whittle’s index policy was proposed in the literature as a low-complexity heuristic for the problem. Although the policy is simple to implement, characterizing its performance is recognized to be a notoriously tricky task. In the sequel, we provide a new mathematical approach to establish its optimality in the many-users regime for specific network settings. Contrary to previous works in the literature that use restrictive mathematical assumptions, which can be challenging to verify, our approach is based on intricate techniques and is free of any strong mathematical assumptions. These findings showcase that the Whittle’s index policy has analytically provable asymptotic optimality for the age-of-information (AoI) minimization problem. Finally, we lay out numerical results that corroborate our theoretical findings and demonstrate the policy’s notable performance in the many-users regime. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
133. Hexagonal Run-Length Zero Capacity Region—Part I: Analytical Proofs.
- Author
-
Congero, Spencer and Zeger, Kenneth
- Subjects
RUN-length encoding - Abstract
The zero capacity region for hexagonal $(d,k)$ run-length constraints is known for many, but not all, $d$ and $k$. The pairs $(d,k)$ for which it has been unproven whether the capacity is zero or positive consist of: (i) $k=d+2$ when $d \ge 2$; (ii) $k=d+3$ when $d \ge 1$; (iii) $k=d+4$ when either $d=4$ or $d$ is odd and $d \ge 3$; and (iv) $k=d+5$ when $d=4$. Here, we prove that the capacity is zero for all of case (i), and for case (ii) whenever $d \ge 7$. The method used in this paper is to reduce an infinite search space of valid labelings to a finite set of configurations that we exhaustively examine using backtracking. In Part II of this two-part series, we use automated procedures to prove that the capacity is zero in case (i) when $2 \le d \le 9$, in case (ii) when $3 \le d \le 11$, and in case (iii) when $d \in \{4,5,7,9\}$, and that the capacity is positive in case (ii) when $d \in \{1,2\}$, in case (iii) when $d = 3$, and in case (iv). Thus, the only remaining unknown cases are now when $k=d+4$, for any odd $d \ge 11$. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
134. Integer Ring Sieve for Constructing Compact QC-LDPC Codes With Girths 8, 10, and 12.
- Author
-
Tasdighi, Alireza and Boutillon, Emmanuel
- Subjects
RINGS of integers ,SIEVES ,BLOCK codes ,SYMMETRIC matrices ,TANNER graphs ,EXPONENTS - Abstract
This paper proposes a new method of constructing compact fully-connected Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes with girth $g = 8$, 10, and 12. The originality of the proposed method is to impose constraints on the exponent matrix P that reduce the search space drastically. For a targeted lifting degree $N$, the first step of the method is to sieve the integer ring $\mathbb{Z}_{N}$ to obtain a particular subgroup with specific properties, from which the second column of P is constructed (the first column being filled with zeros). The remaining columns of P are determined recursively as multiples of the second column by adapting the sequentially multiplied column (SMC) method, whereby a controlled greedy search is applied at each step. The codes constructed with the proposed semi-algebraic method can be significantly shorter than their best counterparts in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
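The role of the exponent matrix can be illustrated with the classical no-4-cycle condition for QC-LDPC codes (the standard check attributed to Fossorier, 2004, not this paper's sieve). A minimal sketch, where the exponent matrix P and lifting degree N are hypothetical inputs:

```python
from itertools import combinations

def girth_at_least_6(P, N):
    """Classical necessary-and-sufficient no-4-cycle condition for a
    fully-connected QC-LDPC exponent matrix P with lifting degree N:
    the lifted Tanner graph has girth >= 6 iff
      P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1]  !=  0 (mod N)
    for every pair of rows i1 < i2 and columns j1 < j2."""
    rows, cols = len(P), len(P[0])
    for i1, i2 in combinations(range(rows), 2):
        for j1, j2 in combinations(range(cols), 2):
            if (P[i1][j1] - P[i1][j2] + P[i2][j2] - P[i2][j1]) % N == 0:
                return False
    return True
```

Longer girths impose analogous non-vanishing conditions on longer alternating exponent sums, which is why shrinking the search space for P, as the paper does, matters so much.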
135. The Stability of Low-Density Parity-Check Codes and Some of its Consequences.
- Author
-
Liu, Wei and Urbanke, Rudiger
- Subjects
LOW density parity check codes ,DECODING algorithms ,SHIFT registers ,STABILITY criterion - Abstract
We study the stability of low-density parity-check codes under blockwise or bitwise maximum a posteriori decoding, where transmission takes place over a binary-input memoryless output-symmetric channel. Our study stems from the consideration of constructing universal capacity-achieving codes under low-complexity decoding algorithms, where universality refers to the fact that we are considering a family of channels with equal capacity. Consider, e.g., the right-regular sequence by Shokrollahi and the heavy-tail Poisson sequence by Luby et al. Both sequences are provably capacity-achieving under belief propagation decoding when transmission takes place over the binary erasure channel. In this paper we show that many existing capacity-achieving sequences of low-density parity-check codes are not universal under belief propagation decoding. We reveal that the key to showing this non-universality result is determined by the stability of the underlying codes. More concretely, for an ordered and complete channel family and a sequence of low-density parity-check code ensembles, we determine a stability threshold associated with them, which gives rise to a sufficient condition under which the sequence is not universal under belief propagation decoding. Moreover, we show that the same stability threshold applies to blockwise or bitwise maximum a posteriori decoding as well. We demonstrate how the stability threshold can determine an upper bound on the corresponding blockwise or bitwise maximum a posteriori threshold, revealing the operational significance of the stability threshold. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
136. The Subfield Codes of Ovoid Codes.
- Author
-
Ding, Cunsheng and Heng, Ziling
- Subjects
CODING theory ,CIPHERS ,LINEAR codes ,HAMMING weight ,FINITE geometries ,COMBINATORICS ,QUADRICS - Abstract
Ovoids in ${\mathrm{PG}}(3, {\mathrm{GF}}(q))$ have been an interesting topic in coding theory, combinatorics, and finite geometry for a long time. So far only two families of ovoids are known. The first is the elliptic quadrics and the second is the Tits ovoids. It is known that an ovoid in ${\mathrm{PG}}(3, {\mathrm{GF}}(q))$ corresponds to a $[q^{2}+1, 4, q^{2}-q]$ code over ${\mathrm{GF}}(q)$, which is called an ovoid code. The objectives of this paper are to develop the general theory of subfield codes and to study the subfield codes of the two families of ovoid codes. The dimensions, minimum weights, and weight distributions of the subfield codes of the elliptic quadric codes and Tits ovoid codes are settled. The parameters of the duals of these subfield codes are also studied. Some of the codes presented in this paper are optimal, and some are distance-optimal. The parameters of the subfield codes are new. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
137. Feeling the Bern: Adaptive Estimators for Bernoulli Probabilities of Pairwise Comparisons.
- Author
-
Shah, Nihar B., Balakrishnan, Sivaraman, and Wainwright, Martin J.
- Subjects
PROBABILITY theory ,PAIRED comparisons (Mathematics) ,STOCHASTIC processes ,LEAST squares ,ACQUISITION of data - Abstract
We study methods for aggregating pairwise comparison data among a collection of $n$ items with the goal of estimating the outcome probabilities for future comparisons. Working within a flexible model that only imposes a form of strong stochastic transitivity, we introduce an “adaptivity index” which compares the risk of our estimator to that of an oracle, over appropriate sub-models, where the oracle knows the specific sub-model in the ground truth. In addition to measuring the usual worst-case risk of an estimator, this adaptivity index also captures the extent to which the estimator adapts to instance-specific difficulty relative to an oracle estimator. First, we propose a three-step estimator termed count-randomize-least squares, and show that it has adaptivity index upper bounded by $\sqrt{n}$ up to logarithmic factors. We then show that, conditional on the planted clique hypothesis, no computationally efficient estimator can achieve an adaptivity index smaller than $\sqrt{n}$. Second, we show that a regularized least squares estimator can achieve a poly-logarithmic adaptivity index, thereby demonstrating a $\sqrt{n}$-gap between optimal and computationally achievable adaptivity. Finally, we prove that the standard least squares estimator, which is known to be optimally adaptive in several closely related problems, fails to adapt in the context of estimating pairwise probabilities. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
138. Improved Upper Bounds and Structural Results on the Capacity of the Discrete-Time Poisson Channel.
- Author
-
Cheraghchi, Mahdi and Ribeiro, Joao
- Subjects
POISSON distribution ,RANDOM variables ,LOGICAL prediction - Abstract
New capacity upper bounds are presented for the discrete-time Poisson channel with no dark current and an average-power constraint. These bounds are a consequence of techniques developed for the seemingly unrelated problem of upper bounding the capacity of binary deletion and repetition channels. Previously, the best known capacity upper bound in the regime where the average-power constraint does not approach zero was due to Martinez (JOSA B, 2007), which is re-derived as a special case of the framework developed in this paper. Furthermore, this framework is carefully instantiated in order to obtain a closed-form bound that improves the result of Martinez everywhere. Finally, capacity-achieving distributions for the discrete-time Poisson channel are studied under an average-power constraint and/or a peak-power constraint and arbitrary dark current. In particular, it is shown that the support of the capacity-achieving distribution under an average-power constraint must be countably infinite. This settles a conjecture of Shamai (IEE Proceedings I, 1990) in the affirmative. Previously, it was only known that the support must be an unbounded set. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
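For intuition about the channel studied here, the mutual information achieved by a finitely supported input is straightforward to evaluate numerically. A minimal sketch, truncating the output alphabet; the input levels, probabilities, and the truncation point `ymax` are illustrative choices, not values from the paper:

```python
import math

def poisson_pmf(y, lam):
    """P(Y = y) for Y ~ Poisson(lam), computed in log-space for stability."""
    if lam == 0.0:
        return 1.0 if y == 0 else 0.0
    return math.exp(-lam + y * math.log(lam) - math.lgamma(y + 1))

def mutual_information(levels, probs, dark=0.0, ymax=200):
    """I(X;Y) in nats for the discrete-time Poisson channel
    Y ~ Poisson(X + dark), with X supported on `levels` with the
    given probabilities.  The sum over outputs is truncated at ymax."""
    I = 0.0
    for y in range(ymax):
        py = sum(p * poisson_pmf(y, x + dark) for x, p in zip(levels, probs))
        if py <= 0:
            continue
        for x, p in zip(levels, probs):
            pyx = poisson_pmf(y, x + dark)
            if p > 0 and pyx > 0:
                I += p * pyx * math.log(pyx / py)
    return I
```

With an on-off input such as levels [0, 5], the channel behaves almost like a noiseless binary channel, so the mutual information approaches, but stays below, ln 2 nats.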
139. Minimal Linear Codes in Odd Characteristic.
- Author
-
Bartoli, Daniele and Bonini, Matteo
- Subjects
LINEAR codes ,HAMMING weight - Abstract
In this paper, we generalize constructions in two recent works of Ding, Heng, and Zhou to any field $\mathbb{F}_{q}$, $q$ odd, providing infinite families of minimal codes for which the Ashikhmin–Barg bound does not hold. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
140. On Computing the Multiplicity of Cycles in Bipartite Graphs Using the Degree Distribution and the Spectrum of the Graph.
- Author
-
Dehghan, Ali and Banihashemi, Amir H.
- Subjects
BIPARTITE graphs ,MULTIPLICITY (Mathematics) ,DISTRIBUTION (Probability theory) ,LOW density parity check codes ,EIGENVALUES - Abstract
Counting short cycles in bipartite graphs is a fundamental problem of interest in the analysis and design of low-density parity-check codes. The vast majority of research in this area is focused on algorithmic techniques. Most recently, Blake and Lin proposed a computational technique to count the number of cycles of length $g$ in a bi-regular bipartite graph, where $g$ is the girth of the graph. The information required for the computation is the node degree and the multiplicity of the nodes on both sides of the partition, as well as the eigenvalues of the adjacency matrix of the graph (graph spectrum). In this paper, the result of Blake and Lin is extended to compute the number of cycles of length $g+2, \ldots, 2g-2$ for bi-regular bipartite graphs, as well as the number of 4-cycles and 6-cycles in irregular and half-regular bipartite graphs, with $g \geq 4$ and $g \geq 6$, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
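As a point of comparison for spectrum-based counting, the 4-cycle count of a small bipartite graph can be obtained by an elementary direct formula from the biadjacency matrix. This brute-force sketch (not the Blake-Lin method) is handy for sanity-checking algebraic formulas on toy graphs:

```python
from itertools import combinations
from math import comb

def count_4cycles(B):
    """Count 4-cycles in a bipartite graph from its biadjacency
    matrix B (rows = one side of the partition, columns = the other).
    A 4-cycle is determined by an unordered pair of row-nodes plus
    two of their common column-neighbours, so summing C(common, 2)
    over row pairs counts each 4-cycle exactly once."""
    total = 0
    for u, v in combinations(range(len(B)), 2):
        common = sum(a and b for a, b in zip(B[u], B[v]))
        total += comb(common, 2)
    return total
```

For the complete bipartite graph K_{3,3} this gives 3 row pairs times C(3, 2) = 9 cycles, matching the choose-2-rows, choose-2-columns count.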
141. Strong Converses are Just Edge Removal Properties.
- Author
-
Kosut, Oliver and Kliewer, Jorg
- Subjects
INTERFERENCE channels (Telecommunications) ,CAPACITY management (Computers) ,ERROR probability ,INFORMATION theory ,LINEAR network coding - Abstract
This paper explores the relationship between two ideas in network information theory: edge removal and strong converses. Edge removal properties state that if an edge of small capacity is removed from a network, the capacity region does not change too much. Strong converses state that, for rates outside the capacity region, the probability of error converges to 1 as the blocklength goes to infinity. Various notions of edge removal and strong converse are defined, depending on how edge capacity and error probability scale with blocklength, and relations between them are proved. Each class of strong converse implies a specific class of edge removal. The opposite directions are proved for deterministic networks. Furthermore, a technique based on a novel, causal version of the blowing-up lemma is used to prove that for discrete memoryless networks, the weak edge removal property—that the capacity region changes continuously as the capacity of an edge vanishes—is equivalent to the exponentially strong converse—that outside the capacity region, the probability of error goes to 1 exponentially fast. This result is used to prove exponentially strong converses for several examples, including the discrete two-user interference channel with strong interference, with only a small variation from traditional weak converse proofs. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
142. Radical-Locator Polynomials and Row-Echelon Partial Syndrome Matrices With Applications to Decoding Cyclic Codes.
- Author
-
Lee, Chong-Dao
- Subjects
CYCLIC codes ,DECODERS & decoding ,POLYNOMIALS ,MATRIX decomposition ,INFORMATION retrieval ,SILICON - Abstract
Partial syndrome matrices and weak-locator polynomials have received considerable attention in recent years due to their applications in decoding cyclic codes. The technical contribution of this paper is threefold: 1) cyclic codes can be decoded by the newly proposed partial syndrome matrices, which generalize the previously known results on the determination of error positions; 2) a new type of polynomial associated with error locations, called radical-locator polynomial, is defined, which includes the weak-locator polynomial as a special case; and 3) a novel class of matrices with many zero entries, called row-echelon partial syndrome matrices, is presented, based on the Newton identities, for efficiently decoding cyclic codes. It is also shown that the radical-locator polynomials can be obtained from the determinants of the above-mentioned (row-echelon) partial syndrome matrices. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
143. Repairing Multiple Failures for Scalar MDS Codes.
- Author
-
Mardia, Jay, Bartan, Burak, and Wootters, Mary
- Subjects
BANDWIDTHS ,ELECTRICAL engineering ,REED-Solomon codes ,ENCODING ,DIGITAL signal processing -- Mathematics - Abstract
In distributed storage, erasure codes (like Reed–Solomon Codes) are often employed to provide reliability. In this setting, it is desirable to be able to repair one or more failed nodes while minimizing the repair bandwidth. In this paper, motivated by Reed-Solomon codes, we study the problem of repairing multiple failed nodes in a scalar MDS code. We extend the framework of (Guruswami and Wootters, 2017) to give a framework for constructing repair schemes for multiple failures in general scalar MDS codes in the centralized repair model. We then specialize our framework to Reed–Solomon codes, and also extend and improve upon recent results of (Dau et al., 2017). [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
144. Self-Dual Near MDS Codes from Elliptic Curves.
- Author
-
Jin, Lingfei and Kan, Haibin
- Subjects
CODING theory ,ERROR correction (Information theory) ,COMBINATORICS ,REED-Solomon codes - Abstract
In recent years, self-dual MDS codes have attracted a lot of attention due to their theoretical interest and practical importance. Similar to self-dual MDS codes, self-dual near-MDS (NMDS for short) codes have nice structures as well. From both theoretical and practical points of view, it is natural to study self-dual NMDS codes. Although there has been a lot of work on NMDS codes in the literature, little is known about self-dual NMDS codes, and it seems more challenging to construct self-dual NMDS codes than self-dual MDS codes. The only previous work on the construction of self-dual NMDS codes shows the existence of $q$-ary self-dual NMDS codes of length $q-1$ for odd prime power $q$, or of length up to 16 for some small primes $q$ with $q\le 197$. In this paper, we make use of properties of elliptic curves to construct self-dual NMDS codes. It turns out that, as long as $2|q$ and $n$ is even with $4\le n\le q+\lfloor 2\sqrt{q}\rfloor -2$, one can construct a self-dual NMDS code of length $n$ over ${\mathbb{F}}_{q}$. Furthermore, for odd prime power $q$, there exists a self-dual NMDS code of length $n$ over ${\mathbb{F}}_{q}$ if $q\ge 4^{n+3}\times (n+3)^{2}$. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
145. From Cages to Trapping Sets and Codewords: A Technique to Derive Tight Upper Bounds on the Minimum Size of Trapping Sets and Minimum Distance of LDPC Codes.
- Author
-
Dehghan, Ali and Banihashemi, Amir H.
- Subjects
LOW density parity check codes ,CODING theory ,ERROR correction (Information theory) ,MATHEMATICAL bounds ,TANNER graphs - Abstract
Cages, defined as regular graphs with minimum number of nodes for a given girth, are well-studied in graph theory. Trapping sets are graphical structures responsible for error floor of low-density parity-check (LDPC) codes, and are well investigated in coding theory. In this paper, we make connections between cages and trapping sets. In particular, starting from a cage (or a modified cage), we construct a trapping set in multiple steps. Based on the connection between cages and trapping sets, we then use the available results in graph theory on cages and derive tight upper bounds on the size of the smallest trapping sets for variable-regular LDPC codes with a given variable degree and girth. The derived upper bounds in many cases meet the best known lower bounds and thus provide the actual size of the smallest trapping sets. Considering that non-zero codewords are a special case of trapping sets, we also derive tight upper bounds on the minimum weight of such codewords, i.e., the minimum distance, of variable-regular LDPC codes as a function of variable degree and girth. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
146. Intrinsic Capacity.
- Author
-
Yang, Shengtian, Xu, Rui, Chen, Jun, and Zhang, Jian-Kang
- Subjects
CONVEX functions ,BINARY control systems ,ENCODING ,CAUSAL models ,GRAPH theory - Abstract
Every channel can be expressed as a convex combination of deterministic channels with each deterministic channel corresponding to one particular intrinsic state. Such convex combinations are, in general, not unique, each giving rise to a specific intrinsic-state distribution. In this paper, we study the maximum and minimum capacities of a channel when the realization of its intrinsic state is causally available at the encoder and/or the decoder. Several conclusive results are obtained for binary-input channels and binary-output channels. By-products of our investigation include a generalization of the Birkhoff–von Neumann theorem and a condition on the uselessness of causal state information at the encoder. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
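The starting observation, that every channel is a convex combination of deterministic channels, specializes for doubly stochastic matrices to the classical Birkhoff–von Neumann theorem, which the paper generalizes. A brute-force sketch of the greedy decomposition for tiny matrices (illustration of the classical theorem only, not the paper's generalization):

```python
from itertools import permutations

def birkhoff(M, tol=1e-12):
    """Greedy Birkhoff-von Neumann decomposition of a small doubly
    stochastic matrix M into (weight, permutation) pairs.  At each
    step Birkhoff's theorem guarantees a permutation supported on
    the strictly positive entries; brute force over permutations is
    fine for tiny n.  Returns pairs whose weights sum to ~1."""
    n = len(M)
    M = [row[:] for row in M]  # work on a copy
    parts = []
    while True:
        best = None
        for perm in permutations(range(n)):
            w = min(M[i][perm[i]] for i in range(n))  # bottleneck entry
            if w > tol and (best is None or w > best[0]):
                best = (w, perm)
        if best is None:
            return parts
        w, perm = best
        parts.append((w, perm))
        for i in range(n):
            M[i][perm[i]] -= w  # peel off w times this permutation
```

Each permutation plays the role of a deterministic channel (an intrinsic state), and the weights form the intrinsic-state distribution of that particular decomposition.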
147. Cooperative Multi-Sender Index Coding.
- Author
-
Li, Min, Ong, Lawrence, and Johnson, Sarah J.
- Subjects
WIRELESS communications ,INFORMATION theory ,ALGORITHMS ,COMPOSITE materials ,CHEMICAL reactions - Abstract
In this paper, we propose a new coding scheme and establish new bounds on the capacity region for the multi-sender unicast index-coding problem. We revisit existing partitioned distributed composite coding (DCC) proposed by Sadeghi et al. and identify its limitations in the implementation of multi-sender composite coding and in the strategy of sender partitioning. We then propose two new coding components to overcome these limitations and develop a multi-sender cooperative composite coding (CCC). We show that CCC can strictly improve upon partitioned DCC, and is the key to achieve optimality for a number of index-coding instances. The usefulness of CCC and its special cases is illuminated via non-trivial examples, and the capacity region is established for each example. Comparisons between CCC and other non-cooperative schemes in recent works are also provided to further demonstrate the advantage of CCC. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
148. Monotonicity of Step Sizes of MSE-Optimal Symmetric Uniform Scalar Quantizers.
- Author
-
Na, Sangsin and Neuhoff, David L.
- Subjects
GAUSSIAN distribution ,LAPLACIAN matrices ,RAYLEIGH model ,PROBABILITY density function ,GAMMA functions - Abstract
For generalized gamma probability densities, this paper studies the monotonicity of step sizes of optimal symmetric uniform scalar quantizers with respect to mean squared-error distortion. The principal results are that for the special cases of Gaussian, Laplacian, two-sided Rayleigh, and gamma densities, the optimal step size monotonically decreases when the number of levels $N$ increases by two, and that for any generalized gamma density and all sufficiently large $N$, the optimal step size again decreases when $N$ increases by two. Also, it is shown that for a Laplacian density and sufficiently large $N$, the optimal step size decreases when $N$ increases by just one. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
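The monotonicity claim is easy to probe numerically. The sketch below finds the MSE-optimal step size of a symmetric uniform mid-rise quantizer (even number of levels N) for a standard Gaussian by golden-section search; the integration grid and search bracket are arbitrary illustrative choices, not the paper's analytical machinery:

```python
import math

def gauss_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def mse(step, N, hi=12.0, m=2000):
    """MSE of a symmetric uniform mid-rise quantizer with an even
    number N of levels: reproduction points at +/-(k + 0.5)*step,
    thresholds at multiples of step, overload beyond (N/2)*step.
    Plain midpoint-rule integration on [0, hi], doubled by symmetry."""
    half = N // 2
    dx = hi / m
    total = 0.0
    for i in range(m):
        x = (i + 0.5) * dx
        k = min(int(x / step), half - 1)   # cell index, clamped in overload
        total += (x - (k + 0.5) * step) ** 2 * gauss_pdf(x) * dx
    return 2 * total

def optimal_step(N):
    """Golden-section search for the MSE-minimizing step size."""
    lo_s, hi_s = 1e-3, 4.0
    phi = (math.sqrt(5) - 1) / 2
    for _ in range(60):
        a = hi_s - phi * (hi_s - lo_s)
        b = lo_s + phi * (hi_s - lo_s)
        if mse(a, N) < mse(b, N):
            hi_s = b
        else:
            lo_s = a
    return (lo_s + hi_s) / 2
```

Running it for N = 2, 4, 6, 8 reproduces the decreasing pattern the paper proves; the N = 4 optimum lands near the classical tabulated value 0.9957.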
149. Close to Uniform Prime Number Generation With Fewer Random Bits.
- Author
-
Fouque, Pierre-Alain and Mehdi Tibouchi
- Subjects
NUMERICAL analysis ,PROBABILITY theory ,NANOPARTICLES ,FOUNDATIONS of arithmetic ,CRYSTAL structure - Abstract
In this paper, we analyze several variants of a simple method for generating prime numbers with fewer random bits. To generate a prime p less than x, the basic idea is to fix a constant q ∝ x^(1-ε), pick a uniformly random a < q coprime to q, and choose p of the form a + t · q, where only t is updated if the primality test fails. We prove that variants of this approach provide prime generation algorithms requiring few random bits and whose output distribution is close to uniform, under less and less expensive assumptions: first a relatively strong conjecture by H. Montgomery, made precise by Friedlander and Granville; then the Extended Riemann Hypothesis; and finally fully unconditionally using the Barban-Davenport-Halberstam theorem. We argue that this approach has a number of desirable properties compared with the previous algorithms, at least in an asymptotic sense. In particular: 1) it uses far fewer random bits than both the “trivial algorithm” (testing random numbers less than x for primality) and Maurer's almost uniform prime generation algorithm; 2) the distance of its output distribution to uniform can be made arbitrarily small, unlike algorithms like PRIMEINC (studied by Brandt and Damgård), which we show exhibit significant biases; and 3) all quality measures (number of primality tests, output entropy, randomness, and so on) can be obtained under standard conjectures or even unconditionally, whereas most previous nontrivial algorithms can only be proved based on stronger, less standard assumptions like the Hardy-Littlewood prime tuple conjecture. Note, however, that our analysis involves non-explicit constants, and therefore does not establish the superiority of our approach for concrete parameter sizes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
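The basic idea described in the abstract translates directly into code. A minimal sketch with illustrative parameter sizes (a real implementation would follow the paper's choice q ∝ x^(1-ε) and use a cryptographically careful primality test and randomness source):

```python
import math
import random

def is_probable_prime(n, rounds=30):
    """Miller-Rabin probabilistic primality test (standard algorithm)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime(x, q):
    """Generate a prime p < x of the form a + t*q, per the scheme in
    the abstract: draw one random a < q coprime to q (where most of
    the randomness is spent), then redraw only t until a + t*q
    passes the primality test."""
    while True:
        a = random.randrange(1, q)
        if math.gcd(a, q) == 1:
            break
    while True:
        t = random.randrange(0, (x - a) // q)
        p = a + t * q
        if is_probable_prime(p):
            return p
```

Because a is fixed once, every candidate lies in the same arithmetic progression a mod q, which is what makes the random-bit accounting so favorable.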
150. Optimal Uniform Secret Sharing.
- Author
-
Yoshida, Maki, Fujiwara, Toru, and Fossorier, Marc P. C.
- Subjects
ARCHITECTURE ,ENTROPY ,ENCODING ,SECRECY ,ACCESS to information - Abstract
An important problem in secret sharing schemes is minimizing the share size. For $(k, n)$-threshold schemes and $(k, L, n)$-ramp schemes, constructions that minimize the share size are known. This paper presents optimal constructions for a more general class of access structures in which subsets with the same cardinality have the same amount of information about the secret. We refer to schemes with such uniform access structures as uniform secret sharing. We first derive a tight lower bound on the share entropy and then present an optimal construction. Our lower bound exceeds that previously reported. The optimal construction encodes the secret value using one or more ramp schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
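For reference, the standard $(k, n)$-threshold construction with minimal share size is Shamir's scheme, where each share is a single field element. A minimal sketch over a prime field (the modulus is an illustrative choice):

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is modulo PRIME

def share(secret, k, n):
    """Shamir (k, n)-threshold sharing: hide the secret in the
    constant term of a random degree-(k-1) polynomial and hand out
    its evaluations at the points 1..n, one field element per share."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod PRIME
            y = (y * x + c) % PRIME
        return y
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from any
    k shares; fewer than k shares reveal nothing about it."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # divide via Fermat inverse, since PRIME is prime
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret
```

In the paper's terms, the threshold access structure is the extreme uniform case: every subset of size below k learns nothing, and every subset of size k or more learns everything.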