Search Results (6,528 results)
2. Torn-Paper Coding.
- Author
-
Shomorony, Ilan and Vahid, Alireza
- Subjects
- SEQUENTIAL analysis, DATA warehousing
- Abstract
We consider the problem of communicating over a channel that randomly “tears” the message block into small pieces of different sizes and shuffles them. For the binary torn-paper channel with block length $n$ and pieces of length $\mathrm{Geometric}(p_n)$, we characterize the capacity as $C = e^{-\alpha}$, where $\alpha = \lim_{n\to\infty} p_n \log n$. Our results show that the case of $\mathrm{Geometric}(p_n)$-length fragments and the case of deterministic length-$(1/p_n)$ fragments are qualitatively different and, surprisingly, the capacity of the former is larger. Intuitively, this is due to the fact that, in the random fragments case, large fragments are sometimes observed, which boosts the capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
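As a quick, hedged illustration of the capacity formula quoted in the abstract above (the function name and the example scaling of $p_n$ are ours, not the paper's):

```python
import math

def torn_paper_capacity(alpha: float) -> float:
    """Capacity C = e^(-alpha) of the binary torn-paper channel,
    where alpha = lim_{n->inf} p_n * log(n), per the abstract above."""
    return math.exp(-alpha)

# Example: if p_n = 2 / log(n), then alpha = 2 and C = e^(-2) ~ 0.135.
print(torn_paper_capacity(2.0))  # 0.1353...
```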
3. Practical Dirty Paper Coding With Sum Codes.
- Author
-
Rege, Kiran M., Balachandran, Krishna, Kang, Joseph H., and Kemal Karakayali, M.
- Subjects
- CHANNEL coding, SIGNAL quantization, CONSTELLATION diagrams (Signal processing), DECODING algorithms, INTERFERENCE (Telecommunication)
- Abstract
In this paper, we present a practical method to construct dirty paper coding (DPC) schemes using sum codes. Unlike the commonly used approach to DPC where the coding scheme involves concatenation of a channel code and a quantization code, the proposed method embodies a unified approach that emulates the binning method used in the proof of the DPC result. Auxiliary bits are used to create the desired number of code vectors in each bin. Sum codes are obtained when information sequences augmented with auxiliary bits are encoded using linear block codes. Sum-code-based DPC schemes can be implemented using any linear block code, and entail a relatively small increase in decoder complexity when compared to standard communication schemes. They can also lead to significant reduction in transmit power in comparison to standard schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
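The binning idea described in the abstract above (information bits select a bin; auxiliary bits select a code vector within the bin) can be sketched in a few lines. This is a toy, brute-force illustration under our own assumptions; the generator-matrix layout and the Hamming-distance selection rule are our stand-ins for the paper's quantization step:

```python
import itertools
import numpy as np

def sum_code_dpc_encode(G, info_bits, interference, n_aux):
    """Toy binning encoder: message = [info_bits | aux_bits], codeword = msg @ G
    over GF(2). The auxiliary bits are chosen by exhaustive search so that the
    codeword lands closest (in Hamming distance) to the known interference."""
    best_cw, best_dist = None, None
    for aux in itertools.product([0, 1], repeat=n_aux):
        msg = np.concatenate([info_bits, np.asarray(aux, dtype=int)])
        cw = msg @ G % 2
        dist = int(np.sum(cw != interference))
        if best_dist is None or dist < best_dist:
            best_cw, best_dist = cw, dist
    return best_cw
```

A practical design would replace the exhaustive search over auxiliary bits with the structured quantization that the chosen linear block code provides.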
4. On the Dispersions of the Gel’fand–Pinsker Channel and Dirty Paper Coding.
- Author
-
Scarlett, Jonathan
- Subjects
- ERROR probability, GAUSSIAN channels, RANDOM noise theory, DISPERSIVE channels (Telecommunication), CHANNEL coding
- Abstract
This paper studies the second-order coding rates for memoryless channels with a state sequence known non-causally at the encoder. In the case of finite alphabets, an achievability result is obtained using constant-composition random coding, and by using a small fraction of the block to transmit the empirical distribution of the state sequence. For error probabilities less than 0.5, it is shown that the second-order rate improves on an existing one based on independent and identically distributed random coding. In the Gaussian case (dirty paper coding) with an almost-sure power constraint, an achievability result is obtained using random coding over the surface of a sphere, and using a small fraction of the block to transmit a quantized description of the state power. It is shown that the second-order asymptotics are identical to those of the single-user Gaussian channel with the same input power and no state. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
5. A Unified Framework for One-Shot Achievability via the Poisson Matching Lemma.
- Author
-
Li, Cheuk Ting and Anantharam, Venkat
- Subjects
- BROADCAST channels, SOURCE code, LOSSY data compression, CHANNEL coding, INFORMATION theory, INFORMATION networks, PAPER arts
- Abstract
We introduce a fundamental lemma called the Poisson matching lemma, and apply it to prove one-shot achievability results for various settings, namely channels with state information at the encoder, lossy source coding with side information at the decoder, joint source-channel coding, broadcast channels, distributed lossy source coding, multiple access channels and channel resolvability. Our one-shot bounds improve upon the best known one-shot bounds in most of the aforementioned settings (except multiple access channels and channel resolvability, where we recover bounds comparable to the best known bounds), with shorter proofs in some settings even when compared to the conventional asymptotic approach using typicality. The Poisson matching lemma replaces both the packing and covering lemmas, greatly simplifying the error analysis. This paper extends the work of Li and El Gamal on the Poisson functional representation, which mainly considered variable-length source coding settings; the present paper studies fixed-length settings and is not limited to source coding, showing that the Poisson functional representation is a viable alternative to typicality for most problems in network information theory. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
6. Research on a soft saturation nonlinear SSVEP signal feature extraction algorithm.
- Author
-
Liu, Bo, Gao, Hongwei, Jiang, Yueqiu, and Wu, Jiaxuan
- Abstract
Brain–computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEP) have received widespread attention due to their high information transmission rate, high accuracy, and rich instruction set. However, the performance of SSVEP identification methods strongly depends on the amount of calibration data for within-subject classification. Some studies use deep learning (DL) algorithms for inter-subject classification, which can reduce the calculation process, but there is still much room for improvement in performance compared with intra-subject classification. To solve these problems, an efficient SSVEP signal recognition deep learning network model, e-SSVEPNet, based on a soft saturation nonlinear module, is proposed in this paper. The soft saturation nonlinear module applies an exponential-like computation to outputs below zero, improving robustness to noise. Using the SSVEP data set, two sliding-time-window lengths (1 s and 0.5 s), and three training data sizes, this paper evaluates the proposed network model and compares it with traditional and deep learning baseline methods. Classification results obtained with the nonlinear module were also compared. Extensive experimental results show that the proposed network has the highest average accuracy for intra-subject classification on the SSVEP data set, improves SSVEP signal classification and recognition performance, and achieves higher decoding accuracy on short signals, so it has strong potential for realizing high-speed SSVEP-based BCIs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. Am I, Me, and Who's She? Liberation Psychology, Historical Memory, and Muslim women.
- Author
-
Mohr, Sarah Huxtable
- Subjects
- COLLECTIVE memory, PSYCHOLOGY, PATRIARCHY, ISLAMOPHOBIA, PAPER arts, MUSLIM women, MISOGYNY
- Abstract
One of the central underpinnings of Islamophobia is the theoretical construction of Muslim women as "Other". Going hand in hand with colonization, the overall Orientalist imaginary has depicted Muslims as misogynistic, homophobic, and gynophobic in contrast to the normal and enlightened Western European subject. Liberation psychology, as a field of decolonial work, emphasizes several main tasks, one of which is the recovery of historical memory in relation to how humans see each other and the world. This paper builds on the work of recovering historical memory to emphasize the Indo-European origins of misogyny and patriarchy and the subsequent cover-up of this history as a part of the legacy of colonialism and current narratives of Islamophobia. The paper concludes that the work of psychology should include decoding reality to uncover the true nature of the origins of patriarchy, thus building new, revitalized understandings of human society. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
8. The Distortions Region of Broadcasting Correlated Gaussians and Asymmetric Data Transmission Over a Gaussian BC.
- Author
-
Bross, Shraga I.
- Subjects
- DATA transmission systems, DIGITAL communications, GAUSSIAN channels, BROADCAST channels, ELECTRONIC paper, VIDEO coding, DIGITAL video broadcasting
- Abstract
A memoryless bivariate Gaussian source is transmitted to a pair of receivers over an average-power-limited, bandwidth-matched Gaussian broadcast channel. Based on their observations, Receiver 1 reconstructs the first source component and Receiver 2 reconstructs the second source component, both seeking to minimize the expected squared-error distortion. In addition to the source transmission, digital information at a specified rate should be conveyed reliably to Receiver 1, the “stronger” receiver. Given the message rate, we characterize the achievable distortions region. Specifically, there is an $\mathsf{SNR}$ threshold below which dirty paper coding of the digital information against a linear combination of the source components is optimal. The threshold is a function of the digital information rate, the source correlation, and the distortion at the “stronger” receiver. Above this threshold, a dirty paper coding extension of the Tian–Diggavi–Shamai hybrid scheme is shown to be optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
9. Localized Error Correction in Projective Space.
- Author
-
Cai, Ning
- Subjects
- PAPER arts, ERRORS, CODING theory, DIMENSIONS
- Abstract
In this paper, we extend the localized error correction code introduced by L. A. Bassalygo and coworkers from Hamming space to projective space. For constant-dimensional localized error correction codes in projective space, we derive lower and upper bounds on the capacity, which are asymptotically tight when $z < x \leq \frac{n-z}{2}$, where $x$, $z$, and $n$ are the dimensions of codewords, error configurations, and the ground space, respectively. We determine the capacity of non-constant-dimensional localized error correction codes when $z < \frac{n}{3}$. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
10. On the Multiple-Access Channel With Common Rate-Limited Feedback.
- Author
-
Shaviv, Dor and Steinberg, Yossef
- Subjects
- ENCODING, PAPER arts, MATHEMATICS, MARKOV processes, GAUSSIAN channels
- Abstract
This paper studies the multiple-access channel (MAC) with rate-limited feedback. The channel output is encoded into one stream of bits, which is provided causally to the two users at the channel input. An achievable rate region for this setup is derived, based on superposition of information, block Markov coding, and coding with various degrees of side information for the feedback link. The suggested region coincides with the Cover–Leung inner bound for large feedback rates. The result is then extended for cases where there is only a feedback link to one of the transmitters, and for a more general case where there are two separate feedback links to both transmitters. We compute achievable regions for the Gaussian MAC and for the binary erasure MAC. The Gaussian region is computed for the case of common rate-limited feedback, whereas the region for the binary erasure MAC is computed for one-sided feedback. It is known that for the latter, the Cover–Leung region is tight, and we obtain results that coincide with the feedback capacity region for high feedback rates. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
11. Why One and Two Do Not Make Three: Dictionary Form Revisited
- Author
-
Anna Dziemianko
- Subjects
- paper dictionaries, electronic dictionaries, dictionary use, encoding, decoding, retention, research methods, replication, menus, highlighting, noise, access, entry length, Philology. Linguistics, P1-1091, Languages and literature of Eastern Asia, Africa, Oceania, PL1-8844, Germanic languages. Scandinavian languages, PD1-7159
- Abstract
The primary aim of the article is to compare the usefulness of paper and electronic versions of OALDCE7 (Wehmeier 2005) for language encoding, decoding and learning. It is explained why, in contrast to Dziemianko's (2010) findings concerning COBUILD6 (Sinclair 2008), but in keeping with her observations (Dziemianko 2011) with regard to LDOCE5 (Mayor 2009), the e-version of OALDCE7 proved to be no better for language reception, production and learning than the dictionary in book form. An attempt is made to pinpoint the micro- and macrostructural design features which make e-COBUILD6 a better learning tool than e-OALDCE7 and e-LDOCE5. Recommendations concerning further research into the significance of the medium (paper vs. electronic) in the process of dictionary use conclude the study. A secondary aim of the paper is to present the status of replication as a scientific research method and to justify its use in lexicography.
- Published
- 2012
- Full Text
- View/download PDF
12. Fast Detection Fusion Network (FDFnet): An End to End Object Detection Framework Based on Heterogeneous Image Fusion for Power Facility Inspection.
- Author
-
Xu, Xiang, Liu, Gang, Bavirisetti, Durga Prasad, Zhang, Xiangbo, Sun, Boyang, and Xiao, Gang
- Subjects
- OBJECT recognition (Computer vision), IMAGE fusion, FEATURE extraction, IMAGING systems, COMPUTATIONAL complexity, FACILITIES
- Abstract
Visual surveillance for autonomous power facility inspection is considered to be a prominent field of study in the power industry. Existing research in this field focuses on either object detection or image fusion alone, lacking an integrated treatment. Considering this, a single end-to-end object detection method incorporating image fusion, named the Fast Detection Fusion Network (FDFNet), is proposed in this paper to output high-quality fused images together with detection results. The parameters in FDFNet are greatly reduced by sharing the feature extraction network between the image fusion and object detection tasks, yielding a large reduction in computational complexity. On this basis, object detection performance on various types of power facility images is compared and analyzed. For experimentation purposes, an IR (infrared) and VIS (visible) image acquisition system has also been designed. In addition, a dataset named CVPower, with different sets of images for power facility fusion detection, is constructed for this research field. Experimental results demonstrate that the proposed method can achieve an mAP of not less than 70%, process 2 frames per second, and produce high-quality fused images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
13. On Polar Coding for Side Information Channels.
- Author
-
Beilin, Barak and Burshtein, David
- Subjects
- CHANNEL coding, SOURCE code, BINARY codes
- Abstract
We propose a successive cancellation list (SCL) encoding and decoding scheme for the Gelfand–Pinsker (GP) problem based on the known nested polar coding scheme. It applies SCL encoding for the source coding part, and SCL decoding with a properly defined CRC for the channel coding part. The scheme shows improved performance compared to the existing method. A known issue with nested polar codes for binary dirty paper is the existence of frozen channel code bits that are not frozen in the source code. These bits need to be retransmitted in a second phase of the scheme, thus reducing the rate and increasing the required blocklength. We provide an improved bound on the size of this set, and on its scaling with respect to the blocklength, when the Bhattacharyya parameter of the test channel used for source coding is sufficiently large, or the Bhattacharyya parameter of the channel seen at the decoder is sufficiently small. The result is formulated for an arbitrary binary-input memoryless GP problem, since, unlike the previous results, it does not require degradedness of the two channels mentioned above. Finally, we present simulation results for binary dirty paper and noisy write-once memory codes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
14. Decoding of MDP Convolutional Codes over the Erasure Channel under Linear Systems Point of View.
- Author
-
García-Planas, Maria Isabel and Um, Laurence E.
- Subjects
- LINEAR dynamical systems, BLOCK codes, TIME complexity, SYSTEMS theory, LINEAR systems, CONTROLLABILITY in systems engineering
- Abstract
This paper highlights the decoding capabilities of MDP convolutional codes over the erasure channel by defining them as discrete linear dynamical systems, which allows the controllability and observability tools of linear systems theory to be applied, in particular output observability, which is easily described in matrix language. These capabilities are compared against those of MDS block codes over the same channel. Not only is the time complexity better, but the decoding capabilities are also increased with this approach, because convolutional codes are more flexible in handling variable-length data streams than block codes, which are fixed-length and less adaptable to varying data lengths without padding or other adjustments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. The Effect of Local Decodability Constraints on Variable-Length Compression.
- Author
-
Pananjady, Ashwin and Courtade, Thomas A.
- Subjects
- CODING theory, BINARY sequences, TELECOMMUNICATION, COMPUTATIONAL complexity, DATA structures
- Abstract
We consider a variable-length source coding problem subject to local decodability constraints. In particular, we investigate the blocklength scaling behavior attainable by encodings of $r$-sparse binary sequences, under the constraint that any source bit can be correctly decoded upon probing at most $d$ codeword bits. We consider both adaptive and non-adaptive access models, and derive upper and lower bounds that often coincide up to constant factors. Such a characterization for the fixed-blocklength analog of our problem, known as the bit probe complexity of static membership, remains unknown despite considerable attention from researchers over the last few decades. We also show that locally decodable schemes for sparse sequences are able to decode 0s (frequent source symbols) of the source with far fewer probes on average than they can decode 1s (infrequent source symbols), thus rigorizing the notion that infrequent symbols require high probe complexity, even on average. Connections to the fixed-blocklength model and to communication complexity are also briefly discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
16. Overview and Efficiency of Decoder-Side Depth Estimation in MPEG Immersive Video.
- Author
-
Mieloch, Dawid, Garus, Patrick, Milovanovic, Marta, Jung, Joel, Jeong, Jun Young, Ravi, Smitha Lingadahalli, and Salahieh, Basel
- Subjects
- VIDEO coding, MACHINE learning, VIDEOS, VIDEO codecs, BINARY sequences, VIDEO processing
- Abstract
This paper presents the overview and rationale behind the Decoder-Side Depth Estimation (DSDE) mode of the MPEG Immersive Video (MIV) standard, using the Geometry Absent profile, for efficient compression of immersive multiview video. A MIV bitstream generated by an encoder operating in the DSDE mode does not include depth maps. It only contains the information required to reconstruct them in the client or in the cloud: decoded views and metadata. The paper explains the technical details and techniques supported by this novel MIV DSDE mode. The description additionally includes the specification of Geometry Assistance Supplemental Enhancement Information, which helps to reduce the complexity of depth estimation when performed in the cloud or at the decoder side. The depth estimation in MIV is a non-normative part of the decoding process; therefore, any method can be used to compute the depth maps. This paper lists a set of requirements for depth estimation, induced by the specific characteristics of the DSDE. The depth estimation reference software, continuously and collaboratively developed with MIV to meet these requirements, is presented in this paper. Several original experimental results are presented. The efficiency of the DSDE is compared to two MIV profiles. The combined non-transmission of depth maps and efficient coding of textures enabled by the DSDE leads to efficient compression and rendering quality improvement compared to the usual encoder-side depth estimation. Moreover, results of the first evaluation of state-of-the-art multiview depth estimators in the DSDE context, including machine learning techniques, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
17. Variable-Length Coding With Shared Incremental Redundancy: Design Methods and Examples.
- Author
-
Wang, Haobo, Ranganathan, Sudarsan V. S., and Wesel, Richard D.
- Subjects
- VIDEO coding, LOW density parity check codes, PARALLEL processing, REDUNDANCY in engineering
- Abstract
Variable-length (VL) coding with feedback is a commonly used technique that can approach point-to-point Shannon channel capacity with a significantly shorter average codeword length than fixed-length coding without feedback. This paper uses the inter-frame coding of Zeineddine and Mansour, originally introduced to address varying channel-state conditions in broadcast wireless communication, to approach capacity on point-to-point channels using VL codes without feedback. The per-symbol complexity is comparable to decoding the VL code with feedback (plus the additional complexity of a small peeling decoder amortized over many VL codes) and presents the opportunity for encoders and decoders that utilize massive parallel processing, where the VL decoders can run simultaneously. This paper provides an analytical framework and a design process for the degree distribution of the inter-frame code that allows the feedback-free system to achieve 96% or more of the throughput of the original VL code with feedback. As examples of VL codes, we consider non-binary (NB) low-density parity-check (LDPC), binary LDPC, and convolutional VL codes. The NB-LDPC VL code with an 8-bit CRC and an average codeword length of 336 bits achieves 85% of capacity with four rounds of ACK/NACK feedback. The proposed scheme using shared incremental redundancy without feedback achieves 97% of that performance or 83% of the channel capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
18. Source Coding When the Side Information May Be Delayed.
- Author
-
Simeone, Osvaldo and Permuter, Haim Henri
- Subjects
- MARKOV processes, MEMORY, PAPER arts, CHANNEL coding, INFORMATION theory
- Abstract
For memoryless sources, delayed side information at the decoder does not improve the rate-distortion function. However, this is not the case for sources with memory, as demonstrated by a number of works focusing on the special case of (delayed) feedforward. In this paper, a setting is studied in which the encoder is potentially uncertain about the delay with which measurements of the side information, which is available at the encoder, are acquired at the decoder. Assuming a hidden Markov model for the source sequences, at first, a single-letter characterization is given for the setup where the side information delay is arbitrary and known at the encoder, and the reconstruction at the destination is required to be asymptotically lossless. Then, with delay equal to zero or one source symbol, a single-letter characterization of the rate-distortion region is given for the case where, unbeknownst to the encoder, the side information may be delayed or not. Finally, examples for binary and Gaussian sources are provided. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
19. A Texture-Hidden Anti-Counterfeiting QR Code and Authentication Method.
- Author
-
Wang, Tianyu, Zheng, Hong, You, Changhui, and Ju, Jianping
- Subjects
- TWO-dimensional bar codes, GAUSSIAN distribution
- Abstract
This paper designs a texture-hidden QR code to prevent the illegal copying that ordinary QR codes are prone to owing to their lack of anti-counterfeiting ability. Combining random texture patterns and a refined QR code, the code is not only capable of regular coding but also has a strong anti-copying capability. Based on the proposed code, a quality assessment algorithm (MAF) and a dual feature detection algorithm (DFDA) are also proposed. The MAF is compared with several current no-reference algorithms and achieves 95% and 96% accuracy for blur type and blur degree, respectively. The DFDA is compared with various texture and corner methods and achieves accuracy, precision, and recall of up to 100%, and also performs well on datasets attacked with reduction and cutting. Experiments on self-built datasets show that the code designed in this paper has excellent feasibility and anti-counterfeiting performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
20. Data Dissemination Using Instantly Decodable Binary Codes in Fog-Radio Access Networks.
- Author
-
Douik, Ahmed and Sorour, Sameh
- Subjects
- SELECTIVE dissemination of information, BINARY codes, RADIO access networks, MOBILE communication systems, LINEAR network coding, DECODING algorithms
- Abstract
This paper considers a device-to-device (D2D) fog-radio access network wherein a set of users are required to store/receive a set of files. The D2D devices are connected to a subset of the cloud data centers and thus possess a subset of the data. This paper is interested in reducing the total time of communication, i.e., the completion time, required to disseminate all files among all devices using instantly decodable network coding (IDNC). Unlike previous studies that assume a fully connected communication network, this paper tackles the more realistic scenario of a partially connected network in which devices are not all in the transmission range of one another. The joint optimization of selecting the transmitting device(s) and the file combination(s) is first formulated, and its intractability is exhibited. The completion time is approximated using the celebrated decoding delay approach by deriving the relationship between the quantities in a partially connected network. The paper introduces the cooperation graph and demonstrates that the problem is equivalent to a maximum weight clique problem over the newly designed graph. Extensive simulations reveal that the proposed solution provides noticeable performance enhancement and outperforms previously proposed IDNC-based schemes. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
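The reduction described in the entry above (completion-time minimization as a maximum weight clique problem over the cooperation graph) can be exercised with an off-the-shelf solver. The toy graph, weights, and compatibility edges below are invented purely for illustration:

```python
import networkx as nx

# Each vertex stands for a (transmitting device, file combination) choice,
# weighted by its estimated benefit; edges join choices that can be
# scheduled together.
G = nx.Graph()
G.add_nodes_from([(0, {"weight": 3}), (1, {"weight": 2}),
                  (2, {"weight": 2}), (3, {"weight": 1})])
G.add_edges_from([(0, 1), (0, 2), (1, 2), (2, 3)])

# NetworkX ships an exact maximum weight clique solver (integer weights).
clique, total = nx.max_weight_clique(G, weight="weight")
print(clique, total)  # e.g. [0, 1, 2] 7
```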
21. Decoding kinematic information from beta-band motor rhythms of speech motor cortex: a methodological/analytic approach using concurrent speech movement tracking and magnetoencephalography.
- Author
-
Anastasopoulou, Ioanna, Cheyne, Douglas Owen, van Lieshout, Pascal, and Johnson, Blake Warren
- Subjects
- MOTOR cortex, SPEECH, SENSORIMOTOR cortex, MAGNETOENCEPHALOGRAPHY, SCANNING systems, ROBUST control, TRANSCRANIAL magnetic stimulation, NEUROBIOLOGY
- Abstract
Introduction: Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods: Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances /ipa/ and /api/, produced at normal and faster rates. Results: The results show that (1) speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8-12 Hz) and beta band (13-30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern analytic techniques. Discussion: These results show that MASK provides the capability for deriving subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-registration with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Gain-Invariant Dirty Paper Coding for Hierarchical OFDM.
- Author
-
Scagliola, Michele, Perez-Gonzalez, Fernando, and Guccione, Pietro
- Subjects
- ORTHOGONAL frequency division multiplexing, DECODERS & decoding, WIRELESS communications, PHASE modulation, RADIO transmitter fading, DATA transmission systems, CODING theory
- Abstract
A novel approach is presented here to superimpose a low-priority data stream on a high-priority data stream for OFDM systems. The main improvement with respect to conventional hierarchical modulation, which is included for the same purpose in various wireless technologies, is that the low-priority data stream can be decoded from the received subcarrier symbols without any prior equalization for multipath fading channels. As a consequence, the expected performance is nearly invariant to the channel estimation method used to equalize the channel. The low-priority stream is inserted using a gain-invariant dirty paper coding based on Rational Dither Modulation, which was originally proposed for data hiding applications. In this paper, an analysis of the developed system is presented, and several simulations have been carried out using DVB-T system parameters to verify the validity of the proposed approach. The experimental results show the better performance of the proposed method with respect to conventional hierarchical modulation, particularly when the accuracy of the estimated channel response decreases. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
23. Toward a Practical Scheme for Binary Broadcast Channels with Varying Channel Quality Using Dirty Paper Coding.
- Author
-
Kyung, Gyu Bum and Wang, Chih-Chun
- Subjects
- BROADCASTING industry, CODING theory, SYSTEMS design, DECODERS (Electronics), COMPUTATIONAL complexity, RADIO transmitter-receivers, ITERATIVE methods (Mathematics)
- Abstract
We consider practical schemes for binary dirty-paper channels and broadcast channels (BCs) with two receivers and varying channel quality. With the BC application in mind, this paper proposes a new design for binary dirty paper coding (DPC). By exploiting the concept of coset binning, the complexity of the system is greatly reduced when compared to the existing works. Some design challenges of the coset binning approach are identified and addressed. The proposed binary DPC system achieves similar performance to the state-of-the-art, superposition-coding-based system while demonstrating significant advantages in terms of complexity and flexibility of system design. For binary BCs, achieving the capacity generally requires the superposition of a normal channel code and a carefully designed channel code with non-uniform bit distribution. The non-uniform bit distribution is chosen according to the channel conditions. Therefore, to achieve the capacity for binary BCs with varying channel quality, it is necessary to use quantization codes of different rates, which significantly increases the implementation complexity. In this paper, we also propose a broadcast scheme that generalizes the concept of binary DPC, which we term soft DPC. By combining soft DPC with time sharing, we achieve a large percentage of the capacity for a wide range of channel quality with little complexity overhead. Our scheme uses only one fixed pair of codes for users 1 and 2, and a single quantization code, which possesses many practical advantages over traditional time sharing and superposition coding solutions and provides strictly better performance. [ABSTRACT FROM PUBLISHER]
- Published
- 2011
- Full Text
- View/download PDF
24. Adaptive Graph Auto-Encoder for General Data Clustering.
- Author
-
Li, Xuelong, Zhang, Hongyuan, and Zhang, Rui
- Subjects
- WEIGHTED graphs, TASK analysis, FUZZY clustering technique, TANNER graphs
- Abstract
Graph-based clustering plays an important role in the clustering area. Recent studies of graph neural networks (GNNs) have achieved impressive success on graph-type data. However, in general clustering tasks no graph structure is given, so GNNs cannot be applied to clustering directly, and the strategy used to construct a graph is crucial for performance. Therefore, how to extend GNNs to general clustering tasks is an attractive problem. In this paper, we propose a graph auto-encoder for general data clustering, AdaGAE, which constructs the graph adaptively according to the generative perspective of graphs. The adaptive process is designed to induce the model to exploit the high-level information behind the data and utilize the non-Euclidean structure sufficiently. Importantly, we find that the simple update of the graph will result in severe degeneration, which can be summarized as: better reconstruction means worse update. We provide rigorous theoretical and empirical analysis. We then design a novel mechanism to avoid the collapse. By extending generative graph models to general data types, a graph auto-encoder with a novel decoder is devised, and weighted graphs can also be applied to GNNs. AdaGAE performs well and stably on datasets of different scales and types. Besides, it is insensitive to the initialization of parameters and requires no pretraining. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
25. Research on the Performance of an End-to-End Intelligent Receiver with Reduced Transmitter Data.
- Author
-
Wang, Mingbo, Wang, Anyi, Zhang, Yuzhi, and Chai, Jing
- Subjects
- TRANSMITTERS (Communication), TELECOMMUNICATION systems, DATA transmission systems
- Abstract
Transmitting large amounts of data is one of the challenges faced by communication systems. In this paper, we revisit the intelligent receiver, built from a neural network, and find that it can reduce the data at the transmitting end while improving decoding accuracy. Specifically, we first construct an intelligent receiver model and then design two ways to reduce the data at the transmitter side, namely end-of-transmitter data trimming and equal-interval data trimming, to investigate the decoding performance of the receiver under the different trimming methods. The simulation results show that the receiver still decodes accurately when a small amount of data is trimmed at the end of the transmitted sequence, while with equal-interval trimming the intelligent receiver outperforms a conventional receiver operating on complete data. Moreover, the receiver with equal-interval trimming has a lower BER when the transmitter-side data is reduced by the same length. This paper provides a new solution for reducing the amount of data at the transmitter side. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
26. Almost Optimal Construction of Functional Batch Codes Using Extended Simplex Codes.
- Author
-
Yohananov, Lev and Yaakobi, Eitan
- Subjects
- LINEAR codes, INFORMATION retrieval, LOGICAL prediction
- Abstract
A functional $k$ -batch code of dimension $s$ consists of $n$ servers storing linear combinations of $s$ linearly independent information bits. Any multiset request of size $k$ of linear combinations (or requests) of the information bits can be recovered by $k$ disjoint subsets of the servers. The goal under this paradigm is to find the minimum number of servers for given values of $s$ and $k$. A recent conjecture states that for any $k=2^{s-1}$ requests the optimal solution requires $2^{s}-1$ servers. This conjecture is verified for $s \leqslant 5$ but previous work could only show that codes with $n=2^{s}-1$ servers can support a solution for $k=2^{s-2} + 2^{s-4} + \left \lfloor{ \frac { 2^{s/2}}{\sqrt {24}} }\right \rfloor $ requests. This paper reduces this gap and shows the existence of codes for $k=\lfloor \frac {5}{6}2^{s-1} \rfloor - s$ requests with the same number of servers. Another construction in the paper provides a code with $n=2^{s+1}-2$ servers and $k=2^{s}$ requests, which is an optimal result. These constructions are mainly based on extended Simplex codes and equivalently provide constructions for parallel Random I/O (RIO) codes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
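A quick numeric comparison of the two bounds quoted in the abstract above, both for $n = 2^s - 1$ servers, against the conjectured target $k = 2^{s-1}$; the helper names are ours:

```python
from math import isqrt

def k_previous(s: int) -> int:
    # Earlier bound: 2^(s-2) + 2^(s-4) + floor(2^(s/2)/sqrt(24)).
    # floor(2^(s/2)/sqrt(24)) = floor(sqrt(2^s/24)) = isqrt(2^s // 24).
    return 2 ** (s - 2) + 2 ** (s - 4) + isqrt(2 ** s // 24)

def k_new(s: int) -> int:
    # This paper's bound: floor((5/6) * 2^(s-1)) - s.
    return (5 * 2 ** (s - 1)) // 6 - s

for s in (8, 10, 12):
    print(s, 2 ** s - 1, k_previous(s), k_new(s), 2 ** (s - 1))
# s=10: 1023 servers, 326 requests before vs. 416 now, target 512.
```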
27. Fundamental Limits of Distributed Linear Encoding.
- Author
-
Khooshemehr, Nastaran Abadi and Maddah-Ali, Mohammad Ali
- Subjects
- CODING theory, FINITE fields, ENCODING, LINEAR systems, CHANNEL coding, LINEAR codes
- Abstract
In general coding theory, we often assume that error is observed in transferring or storing encoded symbols, while the process of encoding itself is error-free. Motivated by recent applications of coding theory, in this paper, we consider the case where the process of encoding is distributed and prone to error. We introduce the problem of distributed encoding, comprised of a set of $K \in \mathbb{N}$ isolated source nodes and $N \in \mathbb{N}$ encoding nodes. Each source node has one symbol from a finite field, which is sent to each of the encoding nodes. Each encoding node stores an encoded symbol from the same field, as a function of the received symbols. However, some of the source nodes are controlled by the adversary and may send different symbols to different encoding nodes. Depending on the number of the adversarial nodes, denoted by $\beta \in \mathbb{N}$, and the cardinality of the set of symbols that each one generates, denoted by $v \in \mathbb{N}$, the process of decoding from the encoded symbols could be impossible. Assume that a decoder connects to an arbitrary subset of $t \in \mathbb{N}$ encoding nodes and wants to decode the symbols of the honest nodes correctly, without necessarily identifying the sets of honest and adversarial nodes. An important characteristic of a distributed encoding system is $t^{*} \in \mathbb{N}$, the minimum such $t$, which is a function of $K$, $N$, $\beta$, and $v$. In this paper, we study the distributed linear encoding system, i.e., one in which the encoding nodes use linear coding. We show that $t^{*}_{\mathrm{Linear}} = K + 2\beta(v-1)$ if $N \ge K + 2\beta(v-1)$, and $t^{*}_{\mathrm{Linear}} = N$ if $N \le K + 2\beta(v-1)$. In order to achieve $t^{*}_{\mathrm{Linear}}$, we use random linear coding and show that in any feasible solution that the decoder finds, the messages of the honest nodes are decoded correctly. In order to prove the converse of the fundamental limit, we show that when the adversary behaves in a particular way, it can always confuse the decoder between two feasible solutions that differ in the message of at least one honest node. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
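The fundamental limit stated in the abstract above is directly computable; a minimal sketch (the function name is ours):

```python
def t_star_linear(K: int, N: int, beta: int, v: int) -> int:
    """t*_Linear = K + 2*beta*(v-1) when N >= K + 2*beta*(v-1), else N;
    the two cases from the abstract collapse into a single min."""
    return min(N, K + 2 * beta * (v - 1))

# Example: K = 10 sources, N = 20 encoders, beta = 2 adversarial sources,
# each sending up to v = 3 distinct symbols -> t* = 10 + 2*2*(3-1) = 18.
print(t_star_linear(10, 20, 2, 3))  # 18
```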
28. Securing Files Using Hybrid Cryptography.
- Author
-
Padmavathi, V., Kumar, M. Jayanth, and Reddy, M. Saikiran
- Subjects
- DATA security, PUBLIC key cryptography, DATA encryption, CRYPTOGRAPHY, CONFIDENTIAL communications
- Abstract
Data security in terms of file encryption is crucial: nearly every file contains either personal information or industrial data. Hybrid encryption combines symmetric and asymmetric encryption algorithms, allowing users to send files safely. Because asymmetric encryption alone slows down the encryption process, symmetric encryption is used for the file itself, so the advantages of both forms of encryption are utilized. The result is another layer of security with no extra burden on system performance. The idea of hybrid encryption is simple: we use AES to encrypt the file, and then, to maintain the secrecy of the key, we encrypt the AES key using RSA. This paper presents a distinct approach for securely storing files. The proposed scheme also ensures that the model provides confidentiality and integrity mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
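The AES-then-RSA recipe in the abstract above maps directly onto the Python `cryptography` package. The sketch below is our own minimal rendering of that generic recipe; AES-GCM and OAEP are our choices (the abstract does not specify modes), with AES-GCM also supplying the integrity mechanism the abstract mentions:

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def hybrid_encrypt(public_key, file_bytes):
    aes_key = AESGCM.generate_key(bit_length=256)    # fresh key per file
    nonce = os.urandom(12)
    ciphertext = AESGCM(aes_key).encrypt(nonce, file_bytes, None)
    wrapped_key = public_key.encrypt(aes_key, OAEP)  # RSA protects the key
    return wrapped_key, nonce, ciphertext

def hybrid_decrypt(private_key, wrapped_key, nonce, ciphertext):
    aes_key = private_key.decrypt(wrapped_key, OAEP)
    return AESGCM(aes_key).decrypt(nonce, ciphertext, None)

sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wk, n, ct = hybrid_encrypt(sk.public_key(), b"industrial data")
assert hybrid_decrypt(sk, wk, n, ct) == b"industrial data"
```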
29. Reverse Low-Power Broadside Tests.
- Author
-
Pomeranz, Irith
- Subjects
- LOGIC circuits, INTEGRATED circuit design, LOGIC circuits testing
- Abstract
This paper defines a new type of low-power broadside test, called a reverse low-power broadside test, whose application requires design-for-testability logic. The unique feature of a reverse low-power broadside test is that it duplicates the switching activity during the second functional capture cycle of a given low-power broadside test, except that signal transitions are reversed. Thus, the switching activity of a reverse low-power broadside test duplicates that of a low-power broadside test in every subcircuit and on every line individually. In addition, the reversed test detects different faults, and can thus increase the fault coverage of a low-power broadside test set. This paper studies the ability of reverse low-power broadside tests to increase the transition fault coverage in benchmark circuits, considering functional broadside tests as well as low-power broadside tests that are not functional. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
30. Optimum Overflow Thresholds in Variable-Length Source Coding Allowing Non-Vanishing Error Probability.
- Author
-
Nomura, Ryo and Yagi, Hideki
- Subjects
- SOURCE code, CODING theory, ERROR probability, CHANNEL coding, BOOLEAN functions, PROBABILITY theory
- Abstract
The variable-length source coding problem allowing an error probability up to some constant is considered for general sources. In this problem, the optimum mean codeword length of variable-length codes has already been determined. In this paper, on the other hand, we focus on the overflow (or excess codeword length) probability instead of the mean codeword length. The infimum of overflow thresholds under the constraint that both the error probability and the overflow probability are smaller than or equal to some constant is called the optimum overflow threshold. We first derive finite-length upper and lower bounds on these probabilities in order to analyze the optimum overflow thresholds. Then, using these bounds, we determine the general formula for the optimum overflow thresholds in both first-order and second-order forms. Next, we consider another expression of the derived general formula so as to reveal the relationship with the optimum coding rate in the fixed-length source coding problem. Finally, we apply the general formula derived in this paper to the case of stationary memoryless sources. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
31. Rate Allocation for Cooperative Orthogonal-Division Channels with Dirty-Paper Coding.
- Author
-
Ng, Cho Yiu, Shum, Kenneth W., Sung, Chi Wan, and Lok, Tat Ming
- Abstract
This paper investigates how much the rate region of the two-user Gaussian interference channel can be enlarged by allowing the two source nodes to cooperate. Two cooperative transmission schemes are proposed, based on dirty-paper coding and the assumption that the radio bandwidth is partitioned into two parts, each utilized by one source node. The achievable rate regions and the outage performance of these two schemes are compared with the simplified Han-Kobayashi scheme, an efficient coding scheme for the interference channel. Simulation results show that in some channel realizations, the rate region of the Han-Kobayashi scheme is a subset of the rate regions of our two proposed cooperative transmission schemes. Furthermore, a significant gain in outage performance can be obtained, as the cooperative schemes have twice the diversity order of the simplified Han-Kobayashi scheme. While both cooperative schemes are able to yield large diversity gains, one of them can be implemented with a simple decoder. Moreover, it admits an efficient algorithm for maximizing its weighted sum rate, and can be extended easily to the multi-channel case. [ABSTRACT FROM PUBLISHER]
- Published
- 2010
- Full Text
- View/download PDF
32. Polar Codes for Quantum Reading.
- Author
-
Pereira, Francisco Revson F. and Mancini, Stefano
- Subjects
- QUANTUM states, SQUARE root, READING, ERROR probability, OPEN-ended questions
- Abstract
Quantum readout provides a general framework for formulating statistical discrimination of quantum channels. Several approaches to this problem have been taken. However, much remains to be done in optimizing channel discrimination using classical codes. At least two open questions can be pointed out: how to construct low-complexity encoding schemes that are interesting for channel discrimination and, more importantly, how to develop capacity-achieving protocols. This paper aims at presenting a solution to these questions using polar codes. Firstly, we characterize the information rate and reliability parameter of the channels under polar encoding. We also show that the error probability of the proposed scheme decays exponentially with the square root of the code length. Secondly, an analysis of the optimal quantum states to be used as probes is given. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
33. List Decoding of Insertions and Deletions.
- Author
-
Wachter-Zeh, Antonia
- Subjects
- POLYNOMIAL time algorithms, ALGEBRA, ALGORITHMS, DECODERS (Electronics), POLYNOMIALS
- Abstract
List decoding of insertions and deletions in the Levenshtein metric is considered. The Levenshtein distance between two sequences is the minimum number of insertions and deletions needed to turn one of the sequences into the other. In this paper, a Johnson-like upper bound on the maximum list size when list decoding in the Levenshtein metric is derived. This bound depends only on the length and minimum Levenshtein distance of the code, the length of the received word, and the alphabet size. It shows that polynomial-time list decoding beyond half the Levenshtein distance is possible for many parameters. Further, we also prove a lower bound on list decoding of deletions with the well-known binary Varshamov–Tenengolts codes, which shows that the maximum list size grows exponentially with the number of deletions. Finally, an efficient list decoding algorithm for two insertions/deletions with VT codes is given. This decoder can be modified to a polynomial-time list decoder of any constant number of insertions/deletions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
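The distance used in the entry above counts insertions and deletions only (no substitutions), which reduces to a longest-common-subsequence computation via the standard identity $d(a,b) = |a| + |b| - 2\,\mathrm{LCS}(a,b)$; a small sketch (function name ours):

```python
def indel_distance(a: str, b: str) -> int:
    """Minimum number of insertions and deletions turning a into b,
    via d(a, b) = len(a) + len(b) - 2 * LCS(a, b)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]  # LCS table
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return m + n - 2 * dp[m][n]

print(indel_distance("11001", "1011"))  # 3
```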
34. Improved Bounds on Lossless Source Coding and Guessing Moments via Rényi Measures.
- Author
-
Sason, Igal and Verdu, Sergio
- Subjects
- CODING theory, CRYPTOGRAPHY, SEQUENTIAL decoding, GAUSSIAN function, STATISTICAL hypothesis testing
- Abstract
This paper provides upper and lower bounds on the optimal guessing moments of a random variable taking values on a finite set when side information may be available. These moments quantify the number of guesses required for correctly identifying the unknown object and, similarly to Arikan’s bounds, they are expressed in terms of the Arimoto-Rényi conditional entropy. Although Arikan’s bounds are asymptotically tight, the improvement of the bounds in this paper is significant in the non-asymptotic regime. Relationships between moments of the optimal guessing function and the MAP error probability are also established, characterizing the exact locus of their attainable values. The bounds on optimal guessing moments serve to improve non-asymptotic bounds on the cumulant generating function of the codeword lengths for fixed-to-variable optimal lossless source coding without prefix constraints. Non-asymptotic bounds on the reliability function of discrete memoryless sources are derived as well. Relying on these techniques, lower bounds on the cumulant generating function of the codeword lengths are derived, by means of the smooth Rényi entropy, for source codes that allow decoding errors. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
35. Polar Channel Coding Schemes for Two-Dimensional Magnetic Recording Systems.
- Author
-
Saito, Hidetoshi
- Subjects
- MAGNETIC recorders & recording, CHANNEL coding, ERROR correction (Information theory), MODULATION coding, DECODING algorithms
- Abstract
This paper proposes new two-dimensional magnetic recording (TDMR) systems using polar channel coding as practical error correction coding. It is known that the time and space complexities of the encoding/decoding algorithms based on polar channel coding are $\mathcal{O}(N \log N)$, where $N$ is the codeword (block) length. Comparing the error-correction performance of a polar code with that of a low-density parity-check (LDPC) code at the same rate, the polar code has a longer length yet its decoder still has a lower implementation complexity than the LDPC decoder. Therefore, relatively low-complexity coding schemes are preferable for any TDMR system operating at high rates and relatively long codeword lengths. In this paper, the proposed TDMR system serially concatenates a two-dimensional (2-D) modulation code with one-dimensional (1-D) polar codes in each down-track direction. These element polar codes are designed on the fundamentals of channel polarization theory, applied to channels with memory. The performance of the signal processing scheme with concatenated coding and generalized partial-response equalization is evaluated for the proposed TDMR system using bit-patterned media by computer simulations. The results show that the block error rate performance of the proposed TDMR system with the 2-D modulation and polar channel coding schemes is superior to that of the 1-D system with conventional 1-D high-rate modulation and LDPC coding schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
36. Optimal Construction for Decoding 2D Convolutional Codes over an Erasure Channel.
- Author
-
Pinto, Raquel, Spreafico, Marcos, and Vela, Carlos
- Subjects
- BIBLIOGRAPHY, BLOCK codes, REED-Solomon codes
- Abstract
In general, the problem of building optimal convolutional codes under a given criterion is hard, especially when field-size restrictions are applied. In this paper, we confront the challenge of constructing an optimal 2D convolutional code for communication over an erasure channel. We propose a general construction method for these codes. Specifically, we provide a construction that is optimal with respect to the decoding method presented in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. An Investigation of the Cross-Language Transfer of Reading Skills: Evidence from a Study in Nigerian Government Primary Schools.
- Author
-
Humble, Steve, Dixon, Pauline, Gittins, Louise, and Counihan, Chris
- Subjects
- READING, PRIMARY schools, SCHOOL day, ENGLISH letters, LANGUAGE policy, PHONOLOGICAL awareness, LANGUAGE transfer (Language learning)
- Abstract
This paper investigates the linguistic interdependence of Grade 3 children studying in government primary schools in northern Nigeria who are learning to read in Hausa (L1) and English (L2) simultaneously. There are few studies in the African context that consider linguistic interdependence and the bidirectional influences of literacy skills in multilingual contexts. A total of 2328 Grade 3 children were tested on their Hausa and English letter sound knowledge (phonemes) and reading decoding skills (word) after participating in a two-year English structured reading intervention programme as part of their school day. In Grade 4, these children will become English immersion learners, with English becoming the medium of instruction. Carrying out bivariate correlations, we find a large, positive, and statistically significant correlation between L1 and L2 test scores. Concerning bidirectionality, a feedback path model illustrates that the L1 word score predicts the L2 word score and vice versa. Multi-level modelling is then used to consider the variation in test scores. Almost two thirds of the variation in the word score is attributable to the pupil level and one third to the school level. The Hausa word score is significantly predicted by the Hausa sound and English word scores. The English word score is significantly predicted by the Hausa word and English sound scores. The findings have implications for language policy and classroom instruction, showing the importance of cross-language transfer between reading skills. The overall results support bidirectionality and linguistic interdependence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
38. On Infinite Families of Narrow-Sense Antiprimitive BCH Codes Admitting 3-Transitive Automorphism Groups and Their Consequences.
- Author
-
Liu, Qi, Ding, Cunsheng, Mesnager, Sihem, Tang, Chunming, and Tonchev, Vladimir D.
- Subjects
- AUTOMORPHISM groups, ALGEBRAIC coding theory, REPRESENTATIONS of groups (Algebra), DATA transmission systems, QUANTUM information science, GROUP theory, LINEAR codes, CYCLIC codes
- Abstract
The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-studied subclass of cyclic codes that have found numerous applications in error correction and notably in quantum information processing. They are widely used in data storage and communication systems. A subclass of attractive BCH codes is the narrow-sense BCH codes over the Galois field $\mathrm{GF}(q)$ with length $q+1$, which are closely related to the action of the projective general linear group of degree two on the projective line. Despite its interest, not much is known about this class of BCH codes. This paper aims to study some of the codes within this class and specifically narrow-sense antiprimitive BCH codes (these codes are also linear complementary dual (LCD) codes that have interesting practical recent applications in cryptography, among other benefits). We shall use tools and combine arguments from algebraic coding theory, combinatorial designs, and group theory (group actions, representation theory of finite groups, etc.) to investigate narrow-sense antiprimitive BCH codes and extend results from the recent literature. Notably, the dimension and the minimum distance of some $q$-ary BCH codes with length $q+1$, and their duals, are determined in this paper. The dual codes of the narrow-sense antiprimitive BCH codes derived in this paper include almost MDS codes. Furthermore, the classification of $\mathrm{PGL}(2, p^{m})$-invariant codes over $\mathrm{GF}(p^{h})$ is completed. As an application of this result, the $p$-ranks of all incidence structures invariant under the projective general linear group $\mathrm{PGL}(2, p^{m})$ are determined. Furthermore, infinite families of narrow-sense BCH codes admitting a 3-transitive automorphism group are obtained. Via these BCH codes, a coding-theory approach to constructing the Witt spherical geometry designs is presented. The BCH codes proposed in this paper are good candidates for permutation decoding, as they have a relatively large group of automorphisms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. Adaptive Loop Filter Hardware Design for 4K ASIC VVC Decoders.
- Author
-
Farhat, Ibrahim, Hamidouche, Wassim, Grill, Adrien, Menard, Daniel, and Deforges, Olivier
- Subjects
- ADAPTIVE filters, VIDEO coding, HARDWARE, IMAGE reconstruction
- Abstract
Versatile video coding (VVC) is the next-generation video coding standard, released in July 2020. VVC introduces new coding tools that enhance coding efficiency compared to its predecessor, high efficiency video coding (HEVC). These new tools have a significant impact on the VVC software decoder, whose complexity is estimated at twice that of an HEVC decoder. In particular, the adaptive loop filter (ALF), introduced in VVC as an in-loop filter, increases both decoding complexity and memory usage. These concerns must be carefully addressed in the design of an efficient hardware implementation of a VVC decoder. In this paper, we present an efficient hardware implementation of the ALF tool for a VVC decoder. The proposed solution establishes a novel scanning order between the luma and chroma components that significantly reduces the ALF memory requirement. The design takes advantage of all ALF features and provides a unified hardware module for all ALF filters. It uses 26 regular multipliers in a pipelined architecture with a fixed throughput of 2 pixels/cycle and a fixed system latency regardless of the selected filter. The design operates at a 600 MHz frequency, enabling an ASIC platform to decode 4K video at 30 frames per second in the 4:2:2 chroma sub-sampling format. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
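A quick back-of-the-envelope check of the throughput figures quoted in entry 39 above: 2 pixels/cycle at 600 MHz versus 4K (3840x2160) at 30 fps in 4:2:2, where the two chroma planes together carry as many samples as the luma plane.

```python
luma = 3840 * 2160 * 30          # luma samples per second
chroma = luma                    # 4:2:2 -> Cb + Cr together equal the luma count
required = luma + chroma         # ~497.7 Msamples/s
available = 600e6 * 2            # 2 pixels/cycle at 600 MHz -> 1200 Msamples/s
print(f"required: {required/1e6:.1f} Ms/s, available: {available/1e6:.0f} Ms/s, "
      f"headroom: {available/required:.2f}x")
```

The quoted design point leaves roughly 2.4x headroom, consistent with a fixed-latency pipeline that never stalls on filter selection.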
40. Low-Complexity Intra Coding in Versatile Video Coding.
- Author
-
Choi, Kiho, Le, The Van, Choi, Yongho, and Lee, Jin Young
- Subjects
VIDEO coding ,CONVOLUTIONAL neural networks ,MARKET entry - Abstract
Versatile Video Coding (VVC) was finalized in 2020 and offers promising coding efficiency, with a bitrate reduction of about 50% for the same video quality as High Efficiency Video Coding. However, its high encoding complexity is a heavy burden on real-time applications. In particular, the very high complexity of intra coding can be a significant barrier to market entry. This paper presents an efficient low-complexity intra coding scheme that employs downsampling and upsampling processes. The downsampling is performed simply by reducing the resolution of the original video in both the horizontal and vertical directions. In the upsampling, convolutional-neural-network-based super-resolution is used to increase the resolution of the reconstructed video. In addition, this paper thoroughly analyzes the performance and complexity of all intra coding tools in VVC. Experimental results demonstrate that a significant reduction in encoding complexity can be achieved with acceptable video quality. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
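A minimal sketch of the kind of CNN-based super-resolution stage entry 40 uses to upsample the reconstructed low-resolution video. The three-layer, SRCNN-style network below is illustrative only; the paper's actual network and training setup may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.feat = nn.Conv2d(1, 64, kernel_size=9, padding=4)  # feature extraction
        self.map = nn.Conv2d(64, 32, kernel_size=1)             # nonlinear mapping
        self.rec = nn.Conv2d(32, 1, kernel_size=5, padding=2)   # reconstruction

    def forward(self, x):
        # Upsample the decoded low-resolution frame first, then restore detail
        x = F.interpolate(x, scale_factor=self.scale, mode="bicubic",
                          align_corners=False)
        x = F.relu(self.feat(x))
        x = F.relu(self.map(x))
        return self.rec(x)

lowres = torch.rand(1, 1, 540, 960)     # e.g. a half-resolution luma plane
highres = TinySR()(lowres)              # -> (1, 1, 1080, 1920)
print(highres.shape)
```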
41. Reconstruction-Computation-Quantization (RCQ): A Paradigm for Low Bit Width LDPC Decoding.
- Author
-
Wang, Linfang, Terrill, Caleb, Stark, Maximilian, Li, Zongwang, Chen, Sean, Hulse, Chester, Kuo, Calvin, Wesel, Richard D., Bauch, Gerhard, and Pitchumani, Rekha
- Subjects
LOW density parity check codes ,FIELD programmable gate arrays ,WIRELESS LANs ,GATE array circuits ,LINEAR network coding - Abstract
This paper uses the reconstruction-computation-quantization (RCQ) paradigm to decode low-density parity-check (LDPC) codes. RCQ facilitates dynamic non-uniform quantization to achieve good frame error rate (FER) performance with very low message precision. For message passing according to a flooding schedule, the RCQ parameters are designed by discrete density evolution. Simulation results on an IEEE 802.11 LDPC code show that, for 4-bit messages, a flooding min-sum RCQ decoder outperforms table-lookup approaches such as information bottleneck (IB) or min-IB decoding, with significantly fewer parameters to store. Additionally, this paper introduces layer-specific RCQ, an extension of RCQ decoding to layered architectures. Layer-specific RCQ uses layer-specific message representations to achieve the best possible FER performance. For layer-specific RCQ, this paper proposes layered discrete density evolution featuring hierarchical dynamic quantization (HDQ) to design the parameters efficiently. Finally, this paper studies field-programmable gate array (FPGA) implementations of RCQ decoders. Simulation results for a (9472, 8192) quasi-cyclic (QC) LDPC code show that a layered min-sum RCQ decoder with 3-bit messages achieves more than a 10% reduction in LUTs and routed nets and more than a 6% decrease in register usage, while maintaining comparable decoding performance, compared to a 5-bit offset min-sum decoder. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
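A minimal sketch of one RCQ check-node update from entry 41: low-bit-width integer messages are Reconstructed to real LLR values via a lookup table, the min-sum Computation runs at full precision, and the result is Quantized back to a few bits with non-uniform thresholds. The tables below are made up for illustration; in the paper they come from discrete density evolution.

```python
import numpy as np

recon = np.array([-6.0, -2.5, -0.8, -0.1, 0.1, 0.8, 2.5, 6.0])  # 3-bit index -> LLR
thresholds = np.array([-3.5, -1.5, -0.4, 0.0, 0.4, 1.5, 3.5])   # LLR -> 3-bit index

def quantize(llr):
    return np.searchsorted(thresholds, llr)        # index into the recon table

def rcq_check_node(msgs_3bit):
    llr = recon[msgs_3bit]                         # Reconstruct
    sign = np.prod(np.sign(llr)) / np.sign(llr)    # product of the *other* signs
    mags = np.abs(llr)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    # min of the other magnitudes (min-sum Computation)
    other_min = np.where(np.arange(len(llr)) == order[0], min2, min1)
    return quantize(sign * other_min)              # Quantize

print(rcq_check_node(np.array([0, 5, 6, 2])))
```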
42. Cache-Based Matrix Technology for Efficient Write and Recovery in Erasure Coding Distributed File Systems.
- Author
-
Shin, Dong-Jin and Kim, Jeong-Joon
- Subjects
CACHE memory ,OPTICAL disks ,INFORMATION & communication technologies for development ,REED-Solomon codes ,DATA recovery - Abstract
With the development of various information and communication technologies, the amount of big data has increased, and distributed file systems have emerged to store it reliably. The replication technique divides the original data into blocks and writes copies on multiple servers for redundancy and fault tolerance; however, it suffers from poor space efficiency, since the total amount stored is several times larger than the original data. The Erasure Coding (EC) technique instead generates parity blocks through encoding calculations when data are written and stores them separately on each server for fault tolerance and data recovery. Even if a specific server fails, the original data can still be recovered through decoding calculations using the parity blocks stored on the remaining servers. However, the matrices required for encoding and decoding are regenerated redundantly on every write and recovery operation, which leads to unnecessary overhead in distributed file systems. This paper proposes a cache-based matrix technique that uploads the matrices generated during encoding and decoding to cache memory and reuses them, rather than generating new matrices each time encoding or decoding occurs. The cache memory design applies the Weighting Size and Cost Replacement Policy (WSCRP) algorithm, which uses parameters known as weights and costs to upload matrices to cache memory and reuse them efficiently. Furthermore, the cache memory table can be managed efficiently because the weight-cost model sorts and updates matrices using these parameters, which reduces replacement cost. The experiments used the Hadoop Distributed File System (HDFS) as the distributed file system, with an EC volume composed of a Reed-Solomon code with parameters (6, 3). The results show reduced write, read, and recovery times for encoding and decoding; in particular, for up to three node failures, systems using WSCRP reduced recovery time by about 30 s compared to regular HDFS systems. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
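A minimal sketch of the core idea in entry 42: cache erasure-coding matrices keyed by their parameters instead of regenerating them on every write or recovery. The weight/cost scoring below is only a guess at the flavor of WSCRP (reuse-weighted, cost-aware eviction); the actual policy parameters are defined in the paper.

```python
import numpy as np

class MatrixCache:
    def __init__(self, capacity=8):
        self.capacity = capacity
        self.store = {}                            # key -> (matrix, weight, cost)

    def get(self, key, build):
        if key in self.store:
            m, w, c = self.store[key]
            self.store[key] = (m, w + 1, c)        # bump reuse weight on a hit
            return m
        matrix = build()                           # expensive: generate the matrix
        if len(self.store) >= self.capacity:       # evict the lowest weight/cost score
            victim = min(self.store,
                         key=lambda k: self.store[k][1] / self.store[k][2])
            del self.store[victim]
        self.store[key] = (matrix, 1, matrix.size) # cost proxy: matrix size
        return matrix

cache = MatrixCache()
# e.g. an RS(6, 3) decode matrix for one particular set of surviving blocks
key = ("decode", 6, 3, (0, 1, 2, 4, 5, 7))
m = cache.get(key, lambda: np.random.randint(0, 256, (6, 6)))  # stand-in builder
```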
43. Blind Decoding of Control Channel for Other Users in 3GPP Standards.
- Author
-
Song, Seongwook, Kwon, Hyukjoon, and Kang, Inyup
- Subjects
INTERFERENCE channels (Telecommunications) ,LONG-Term Evolution (Telecommunications) ,DATA packeting ,RADIO resource management ,MODULATION coding - Abstract
This paper explores the blind decoding of control channels to obtain other users' identities in 3GPP systems such as high-speed packet access and long-term evolution. Reliable decoding of control channels carrying user identities is crucial to mitigate inter-cell interference as well as multi-user interference. This paper develops a user-identity filtering method followed by a user-identity detection method based on traffic persistency, a property common to all the standards; hence, the proposed methods are applicable to all standards regulated by the 3GPP specifications. In particular, this paper analyzes the proposed other-user identity detection algorithm under random coding. Simulation results show that the proposed method is reliable even at low SNRs and agrees with the analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
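A minimal sketch of the two-stage idea in entry 43: candidate user identities are first filtered by a CRC-mask check on each decoded control message (in LTE, for example, the PDCCH CRC is scrambled by the RNTI), and survivors are confirmed only if they persist across several transmissions. The toy CRC, identity range, and corruption model below are all illustrative, not the 3GPP-specified ones.

```python
from collections import Counter
import random

def crc16(x: int) -> int:
    # Toy 16-bit CRC stand-in; the real standards use 3GPP-defined polynomials.
    poly, reg = 0x1021, 0
    for i in range(31, -1, -1):
        bit = (x >> i) & 1
        reg = ((reg << 1) & 0xFFFF) ^ (poly if ((reg >> 15) ^ bit) else 0)
    return reg

random.seed(1)
TRUE_USER = 42

def decoded_messages(n=40):
    # Synthetic stream: the CRC is masked by the user's identity, and some
    # messages are corrupted by decoding errors.
    for _ in range(n):
        payload = random.getrandbits(32)
        masked = crc16(payload) ^ TRUE_USER
        if random.random() < 0.3:
            masked ^= random.getrandbits(10)
        yield payload, masked

history = Counter()
for payload, masked in decoded_messages():
    candidate = crc16(payload) ^ masked      # stage 1: identity filtering
    if candidate < 1024:                     # keep only identities in range
        history[candidate] += 1
# Stage 2: persistency detection -- identities seen repeatedly are real users
print([u for u, n in history.items() if n >= 3])
```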
44. Optimizing Transmission Lengths for Limited Feedback With Nonbinary LDPC Examples.
- Author
-
Vakilinia, Kasra, Ranganathan, Sudarsan V. S., Divsalar, Dariush, and Wesel, Richard D.
- Subjects
LOW density parity check codes ,RANDOM noise theory ,RADIO transmitter fading ,ELECTRONIC feedback ,TELECOMMUNICATION systems - Abstract
This paper presents a general approach to optimizing the number of symbols in increments (packets of incremental redundancy) in a feedback communication system with a limited number of increments. The approach is based on a tight normal approximation of the rate required for successful decoding. Applying this approach to a variety of feedback systems using nonbinary (NB) low-density parity-check (LDPC) codes shows that more than 90% of capacity can be achieved with average blocklengths of fewer than 500 transmitted bits. One result is that the performance with ten increments closely approaches the performance with an infinite number of increments. The paper focuses on binary-input additive white Gaussian noise (BI-AWGN) channels but also demonstrates that the normal approximation works well on examples of fading channels as well as high-SNR AWGN channels that require larger QAM constellations. The paper explores both variable-length feedback codes with termination (VLFT) and the more practical variable-length feedback (VLF) codes without termination, which require no assumption of noiseless transmitter confirmation. For VLF, we consider both a two-phase scheme and a CRC-based scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
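A minimal sketch of the normal approximation that drives the increment-size optimization in entry 44: for $k$ message bits on a channel with capacity $C$ and dispersion $V$, the probability of successful decoding at blocklength $n$ is approximated by a Gaussian, $P[\text{success}] \approx \Phi\big((nC - k)/\sqrt{nV}\big)$ (higher-order terms omitted here). The numbers below are illustrative.

```python
from math import sqrt
from scipy.stats import norm

def success_prob(n, k, C, V):
    # P[decode succeeds by length n] ~ Phi((n*C - k) / sqrt(n*V))
    return norm.cdf((n * C - k) / sqrt(n * V))

C, V, k = 0.5, 0.25, 200      # capacity (bits/use), dispersion, message bits
for n in (380, 420, 460, 500):
    print(n, f"{success_prob(n, k, C, V):.3f}")
```

Increment lengths are then chosen so that the success probabilities at the successive decoding attempts maximize throughput, which is what the paper's optimization automates.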
45. On the Sub-Packetization Size and the Repair Bandwidth of Reed-Solomon Codes.
- Author
-
Li, Weiqi, Wang, Zhiying, and Jafarkhani, Hamid
- Subjects
REED-Solomon codes ,BANDWIDTHS ,FINITE fields ,EXPONENTIAL functions ,LINEAR network coding - Abstract
Reed-Solomon (RS) codes are widely used in distributed storage systems. In this paper, we study the repair bandwidth and sub-packetization size of RS codes. The repair bandwidth is defined as the amount of information transmitted from surviving nodes to a failed node. An RS code can be viewed as a polynomial over a finite field $GF(q^\ell)$ evaluated at a set of points, where $\ell $ is called the sub-packetization size. Smaller bandwidth reduces the network traffic in distributed storage, and smaller $\ell $ facilitates the implementation of RS codes with lower complexity. Recently, Guruswami and Wootters proposed a repair method for RS codes when the evaluation points are the entire finite field. While the sub-packetization size can be arbitrarily small, the repair bandwidth is higher than the minimum storage regenerating (MSR) bound. Tamo, Ye, and Barg achieved the MSR bound, but the sub-packetization size grows faster than an exponential function of the number of evaluation points. In this paper, we present code constructions and repair schemes that extend these results to accommodate different sizes of the evaluation-point set; in other words, we design schemes that realize operating points between these two extremes, providing a flexible tradeoff between the sub-packetization size and the repair bandwidth. In addition, we generalize our schemes to handle multiple failures. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
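A minimal sketch of the repair-bandwidth bookkeeping behind entry 45. With an $(n, k)$ RS code over $GF(q^\ell)$, naive repair downloads $k$ whole codeword symbols ($k\ell$ subsymbols over $GF(q)$), while the MSR bound with all $n-1$ helpers is $(n-1)\ell/(n-k)$ subsymbols; the paper's schemes trade $\ell$ against bandwidth between these extremes.

```python
def naive_repair(n, k, l):
    return k * l                       # download k full codeword symbols

def msr_bound(n, k, l):
    return (n - 1) * l / (n - k)       # minimum-storage-regenerating bound

n, k = 14, 10
for l in (1, 4, 64):
    print(f"l={l:3d}  naive={naive_repair(n, k, l):4d}  "
          f"MSR>={msr_bound(n, k, l):7.1f}")
```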
46. Blind Group Testing.
- Author
-
Huleihel, Wasim, Elishco, Ohad, and Medard, Muriel
- Subjects
ROBUST control ,NOISE measurement - Abstract
The main goal in group testing is to recover a small subset of defective items from a larger population while efficiently reducing the total number of (possibly noisy) tests/measurements required. Under the assumption that the input-output statistical relationship (i.e., the channel law) is known to the recovery algorithm, the fundamental and computational limits of the group testing problem are much better understood than when these statistical relationships are unknown. Practical considerations, however, render this assumption inapplicable, and "blind" recovery/estimation procedures, independent of the input-output statistics, are desired. In this paper, we analyze the fundamental limits of a general noisy group testing problem when this relationship is unknown. Specifically, in the first part of the paper, we propose an efficient scheme based on the idea of separate decoding of items (where each item is recovered separately), for which we derive sufficient conditions on the number of tests required for exact recovery. The difficulty in obtaining these conditions stems from the fact that we allow the number of defective items to grow with the population size, which in turn requires a delicate concentration analysis of certain probabilities. Furthermore, we show that in several scenarios our proposed scheme achieves the same performance as the corresponding non-blind recovery algorithm (where the input-output statistics are known), implying that the proposed blind scheme is robust/universal. Finally, in the second part of the paper, we also propose a computationally inefficient combinatorial scheme (or "joint decoding"), for which we derive similar sufficient conditions. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
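A minimal sketch of separate decoding of items in (noiseless) group testing, in the spirit of the per-item scheme in entry 46, here using the simple COMP rule: an item that appears in any negative test cannot be defective. The blind/noisy analysis in the paper is far more delicate; this only illustrates what "each item is recovered separately" means.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_defective, n_tests = 200, 5, 60
defective = rng.choice(n_items, n_defective, replace=False)
design = rng.random((n_tests, n_items)) < 0.15      # Bernoulli test design
truth = np.zeros(n_items, bool)
truth[defective] = True
outcomes = design[:, truth].any(axis=1)             # noiseless OR channel

# Separate decoding: item j is declared defective iff every test that
# contains it came back positive.
declared = np.array([outcomes[design[:, j]].all() for j in range(n_items)])
print(sorted(np.flatnonzero(declared)), sorted(defective))
```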
47. A Study on Iterative Decoding With LLR Modulator Using Neural Network in SMR System.
- Author
-
Nishikawa, M., Nakamura, Y., Osawa, H., Okamoto, Y., and Kanai, Y.
- Subjects
ITERATIVE decoding ,ARTIFICIAL neural networks ,BIOLOGICAL neural networks - Abstract
This paper focuses on shingled magnetic recording (SMR) as the recording system for future high-density hard disk drives. Previously, we applied a log-likelihood ratio (LLR) modulator to a low-density parity-check (LDPC) coding and iterative decoding system in SMR to improve the decoding performance. In this paper, we propose a new LLR modulator based on a neural network. Furthermore, we show that the proposed neural-network LLR modulator realizes effective iterative decoding on a read/write channel with pattern-dependent medium noise. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
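A minimal sketch of an LLR modulator as a small neural network, as in entry 47: it takes a detector LLR together with some local context and outputs a corrected LLR for the LDPC decoder. The architecture and inputs here are invented for illustration; the paper's network is matched to SMR pattern-dependent medium noise.

```python
import torch
import torch.nn as nn

class LLRModulator(nn.Module):
    def __init__(self, context=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(context, 16), nn.Tanh(),
            nn.Linear(16, 16), nn.Tanh(),
            nn.Linear(16, 1),              # corrected LLR for the center bit
        )

    def forward(self, llr_window):
        return self.net(llr_window).squeeze(-1)

# A window of 5 raw detector LLRs around each bit position (batch of 8)
raw = torch.randn(8, 5)
corrected = LLRModulator()(raw)
print(corrected.shape)                     # torch.Size([8])
```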
48. Optimized Rate-Adaptive Protograph-Based LDPC Codes for Source Coding With Side Information.
- Author
-
Ye, Fangping, Dupraz, Elsa, Mheich, Zeina, and Amis, Karine
- Subjects
VIDEO coding ,LOW density parity check codes ,SOURCE code ,PARITY-check matrix ,DATA transmission systems - Abstract
This paper considers the problem of source coding with side information at the decoder, also known as the Slepian-Wolf coding scheme. In practical applications of this scheme, the statistical relation between the source and the side information can vary from one data transmission to another, and the coding rate must be adapted to the current statistical relation. In this paper, we propose a novel rate-adaptive code construction based on LDPC codes for the Slepian-Wolf coding scheme. The proposed code design method optimizes the code degree distributions at all the considered rates while minimizing the number of short cycles in the parity-check matrices at all rates. Simulation results show that the proposed method greatly reduces the source coding rate compared to the standard low-density parity-check accumulate (LDPCA) solution. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
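A minimal sketch of the syndrome-based Slepian-Wolf setup that entry 48 builds on: the encoder sends only the syndrome $s = Hx \pmod 2$ of the source block, and the decoder recovers $x$ from $s$ and the correlated side information $y$; rate adaptation amounts to choosing how many syndrome bits (rows of $H$) to send. $H$ below is random and the decoder is exhaustive, purely for illustration at toy size; the paper uses optimized protograph LDPC matrices with belief-propagation decoding.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, m = 12, 5                                   # source bits, syndrome bits
H = rng.integers(0, 2, (m, n))                 # stand-in parity-check matrix
x = rng.integers(0, 2, n)                      # source block
y = x ^ (rng.random(n) < 0.1).astype(int)      # side info: noisy copy of x

s = H @ x % 2                                  # encoder sends m bits, not n

# Decoder: the member of the syndrome coset closest to the side information.
# Exhaustive search stands in for BP decoding, workable only at this size.
best = min((np.array(c) for c in product([0, 1], repeat=n)
            if ((H @ np.array(c)) % 2 == s).all()),
           key=lambda c: int((c != y).sum()))
print("recovered correctly:", bool((best == x).all()), " rate:", m / n)
```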
49. Additive, Structural, and Multiplicative Transformations for the Construction of Quasi-Cyclic LDPC Matrices.
- Author
-
Derrien, Alban, Boutillon, Emmanuel, and Cerqueus, Audrey
- Subjects
LOW density parity check codes ,CODING theory ,ERROR-correcting codes ,NUMERICAL analysis ,MATRICES (Mathematics) - Abstract
The construction of a quasi-cyclic low-density parity-check (QC-LDPC) matrix is usually carried out in two steps. In the first step, a prototype matrix is defined according to certain criteria (size, girth, check and variable node degrees, and so on). The second step involves the expansion of the prototype matrix: an integer value, corresponding to a right-rotation of the identity matrix, is assigned to each non-null position of the prototype matrix. Determining these integer values is a complex problem. State-of-the-art solutions use either mathematical constructions that guarantee a given girth for the final QC-LDPC code, or a random search of values until the target girth is satisfied. In this paper, we propose an alternative and complementary method that reduces the search space by defining large equivalence classes of topologically identical matrices under row and column permutations, using additive, structural, and multiplicative transformations. Selecting only a single element per equivalence class can reduce the search space by a few orders of magnitude. We then use the formalism of constraint programming to list the exhaustive sets of solutions for a given girth and a given expansion factor. An example is carried through all sections of the paper to illustrate the methodology. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
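A minimal sketch of the expansion step described in entry 49: each non-null entry of the prototype matrix is replaced by a $Z \times Z$ identity matrix right-rotated by the assigned shift, and each null entry by the $Z \times Z$ zero matrix. Finding good shift values, which is the hard part the paper addresses with equivalence classes and constraint programming, is not shown.

```python
import numpy as np

def expand(proto, Z):
    # proto: -1 marks a null position; otherwise the entry is the circulant shift
    rows, cols = proto.shape
    H = np.zeros((rows * Z, cols * Z), dtype=int)
    I = np.eye(Z, dtype=int)
    for r in range(rows):
        for c in range(cols):
            if proto[r, c] >= 0:
                # right-rotation of the identity by the assigned shift
                H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, proto[r, c], axis=1)
    return H

proto = np.array([[0, 1, -1],
                  [2, -1, 0]])
print(expand(proto, Z=4))
```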
50. A Systematic Approach to Incremental Redundancy With Application to Erasure Channels.
- Author
-
Heidarzadeh, Anoosheh, Chamberland, Jean-Francois, Wesel, Richard D., and Parag, Parimal
- Subjects
CODING theory ,CHANNEL coding ,TELECOMMUNICATION channels ,DATA transmission systems ,TELECOMMUNICATION - Abstract
This paper focuses on the design and evaluation of pragmatic schemes for delay-sensitive communication. Specifically, it studies the operation of data links that employ incremental redundancy as a means to shield information bits from the degradation associated with unreliable channels. While the inquiry puts forth a general methodology, the exposition centers on erasure channels because they are well suited to analysis; nevertheless, the goal is to identify structural properties and design guidelines that are broadly applicable. Conceptually, this paper leverages a methodology, termed sequential differential optimization, aimed at identifying near-optimal block sizes for hybrid ARQ. The technique is applied to erasure channels and extended to scenarios where throughput is maximized subject to a constraint on the feedback rate. The analysis shows that, when maximizing throughput, the impact of the adopted coding strategy and the channel's propensity to erase symbols naturally decouple. Ultimately, block size selection is informed by approximate distributions of the probability of decoding success at every stage of the incremental transmission process. This perspective, which rigorously bridges hybrid automatic repeat request and coding, offers a computationally efficient framework for selecting code rates and blocklengths for incremental redundancy. These findings are supported by numerical results. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
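A minimal sketch of the throughput objective behind block-size selection in entry 50: given a CDF $F(n)$ for the probability that decoding has succeeded once $n$ total symbols are received, the expected throughput of increment lengths $n_1 < n_2 < \dots < n_m$ is the delivered bits divided by the expected number of symbols sent. The toy $F$ and $k$ below are illustrative; the paper derives the distribution from the channel and code and optimizes the $n_i$ by sequential differential optimization.

```python
from math import erf, sqrt

k = 100                                                      # message bits
F = lambda n: 0.5 * (1 + erf((n - 120) / (12 * sqrt(2))))    # toy success CDF

def expected_throughput(lengths):
    exp_symbols, p_prev = 0.0, 0.0
    for n in lengths:
        exp_symbols += (F(n) - p_prev) * n     # decodes exactly at increment n
        p_prev = F(n)
    exp_symbols += (1 - p_prev) * lengths[-1]  # never decodes: full length spent
    return k * p_prev / exp_symbols            # delivered bits per symbol sent

print(expected_throughput([110, 125, 140, 160]))
print(expected_throughput([120, 130, 150, 170]))
```

Comparing a few candidate increment schedules this way is the brute-force analogue of the paper's optimization, which exploits the structure of the objective to avoid such searches.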