5,338 results
Search Results
2. Torn-Paper Coding.
- Author
- Shomorony, Ilan and Vahid, Alireza
- Subjects
- *SEQUENTIAL analysis, *DATA warehousing
- Abstract
We consider the problem of communicating over a channel that randomly “tears” the message block into small pieces of different sizes and shuffles them. For the binary torn-paper channel with block length $n$ and pieces of length $\mathrm{Geometric}(p_n)$, we characterize the capacity as $C = e^{-\alpha}$, where $\alpha = \lim_{n\to\infty} p_n \log n$. Our results show that the case of $\mathrm{Geometric}(p_n)$-length fragments and the case of deterministic length-$(1/p_n)$ fragments are qualitatively different and, surprisingly, the capacity of the former is larger. Intuitively, this is due to the fact that, in the random fragments case, large fragments are sometimes observed, which boosts the capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
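The capacity expression in this abstract is easy to evaluate numerically. A quick illustrative sketch (the function name and sample values are ours, not the authors'; at finite $n$ we use $\alpha \approx p_n \log n$ as a proxy for the limit):

```python
import math

def torn_paper_capacity(p_n: float, n: int) -> float:
    """Capacity C = e^{-alpha} of the binary torn-paper channel,
    using the finite-n proxy alpha = p_n * log(n)."""
    alpha = p_n * math.log(n)
    return math.exp(-alpha)

# Fragments of mean length 1/p_n = 2*log(n) give alpha = 0.5,
# so roughly 61% of the raw bits are usable.
n = 10**6
p_n = 0.5 / math.log(n)
C = torn_paper_capacity(p_n, n)  # exp(-0.5) ~ 0.607
```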
3. On Microstructure Estimation Using Flatbed Scanners for Paper Surface-Based Authentication.
- Author
- Liu, Runze and Wong, Chau-Wai
- Abstract
Paper surfaces under the microscopic view are observed to be formed by intertwisted wood fibers. Such structures of paper surfaces are unique from one location to another and are almost impossible to duplicate. Previous work used microscopic surface normals to characterize such intrinsic structures as a “fingerprint” of paper for security and forensic applications. In this work, we examine several key research questions of feature extraction in both scientific and engineering aspects to facilitate the deployment of paper surface-based authentication when flatbed scanners are used as the acquisition device. We analytically show that, under the unique optical setup of flatbed scanners, the specular reflection does not play a role in norm map estimation. We verify, using a larger dataset than prior work, that the scanner-acquired norm maps, although blurred, are consistent with those measured by confocal microscopes. We confirm that, when choosing an authentication feature, high spatial-frequency subbands of the heightmap are more powerful than the norm map. Finally, we show that it is possible to empirically calculate the physical dimensions of the paper patch needed to achieve a certain authentication performance in equal error rate (EER). We analytically show that log(EER) decreases linearly with the edge length of a paper patch. [ABSTRACT FROM AUTHOR]
- Published
- 2021
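Since this abstract reports that log(EER) falls linearly with the edge length of the paper patch, the patch size needed for a target EER can be extrapolated from two operating points. A sketch with made-up numbers (the function and both measurements are hypothetical, not from the paper):

```python
import math

def edge_length_for_eer(pts, target_eer):
    """Linearly extrapolate log10(EER) vs. edge length L from two
    measured points, then solve for the L that hits target_eer."""
    (l1, e1), (l2, e2) = pts
    slope = (math.log10(e2) - math.log10(e1)) / (l2 - l1)
    return l1 + (math.log10(target_eer) - math.log10(e1)) / slope

# Hypothetical measurements: EER 1e-2 at 4 mm, 1e-4 at 8 mm.
L = edge_length_for_eer([(4.0, 1e-2), (8.0, 1e-4)], 1e-6)  # -> 12.0 mm
```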
4. Ergodic Fading MIMO Dirty Paper and Broadcast Channels: Capacity Bounds and Lattice Strategies.
- Author
- Hindy, Ahmed and Nosratinia, Aria
- Abstract
A multiple-input multiple-output (MIMO) version of the dirty paper channel is studied, where the channel input and the dirt experience the same fading process, and the fading channel state is known at the receiver. This represents settings where signal and interference sources are co-located, such as in the broadcast channel. First, a variant of Costa’s dirty paper coding is presented, whose achievable rates are within a constant gap to capacity for all signal and dirt powers. In addition, a lattice coding and decoding scheme is proposed, whose decision regions are independent of the channel realizations. Under Rayleigh fading, the gap to capacity of the lattice coding scheme vanishes with the number of receive antennas, even at finite Signal-to-Noise Ratio (SNR). Thus, although the capacity of the fading dirty paper channel remains unknown, this paper shows it is not far from its dirt-free counterpart. The insights from the dirty paper channel directly lead to transmission strategies for the two-user MIMO broadcast channel, where the transmitter emits a superposition of desired and undesired (dirt) signals with respect to each receiver. The performance of the lattice coding scheme is analyzed under different fading dynamics for the two users, showing that high-dimensional lattices achieve rates close to capacity. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
5. Dirty Paper Coding Based on Polar Codes and Probabilistic Shaping.
- Author
- Sener, M. Yusuf, Bohnke, Ronald, Xu, Wen, and Kramer, Gerhard
- Abstract
A precoding technique based on polar codes and probabilistic shaping is introduced for dirty paper coding. Two variants of the precoding use multi-level shaping and sign-bit shaping in one dimension. The decoder uses multi-stage successive-cancellation list decoding with list-passing across the bit levels. The approach achieves approximately the same frame error rates as polar codes with multi-level shaping over standard additive white Gaussian noise channels at a block length of 256 symbols and with different amplitude shift keying (ASK) constellations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
6. Practical Dirty Paper Coding With Sum Codes.
- Author
- Rege, Kiran M., Balachandran, Krishna, Kang, Joseph H., and Kemal Karakayali, M.
- Subjects
- *CHANNEL coding, *SIGNAL quantization, *CONSTELLATION diagrams (Signal processing), *DECODING algorithms, *INTERFERENCE (Telecommunication)
- Abstract
In this paper, we present a practical method to construct dirty paper coding (DPC) schemes using sum codes. Unlike the commonly used approach to DPC where the coding scheme involves concatenation of a channel code and a quantization code, the proposed method embodies a unified approach that emulates the binning method used in the proof of the DPC result. Auxiliary bits are used to create the desired number of code vectors in each bin. Sum codes are obtained when information sequences augmented with auxiliary bits are encoded using linear block codes. Sum-code-based DPC schemes can be implemented using any linear block code, and entail a relatively small increase in decoder complexity when compared to standard communication schemes. They can also lead to significant reduction in transmit power in comparison to standard schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
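The binning idea this abstract describes, auxiliary bits jointly encoded with information bits so the codeword can be steered toward the known interference, can be illustrated with a toy binary example (our construction for illustration, not the authors' exact sum-code scheme):

```python
import itertools
import numpy as np

# Toy binary binning sketch: info bits u and auxiliary bits a are jointly
# encoded by one linear block code; a is chosen so the codeword lands close
# to the known interference s, keeping the transmit power (the Hamming
# weight of x = c XOR s) small.

G = np.array([[1, 0, 0, 0, 1, 1, 0],   # systematic (7,4) Hamming generator
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
K_INFO, K_AUX = 2, 2                    # 4 message bits = 2 info + 2 auxiliary

def dpc_encode(u, s):
    best = None
    for a in itertools.product([0, 1], repeat=K_AUX):
        c = np.concatenate([u, a]) @ G % 2   # a codeword in u's "bin"
        x = (c + s) % 2                      # what we actually transmit
        if best is None or x.sum() < best.sum():
            best = x
    return best

u = np.array([1, 0])
s = np.array([1, 1, 0, 1, 0, 0, 1])     # interference known at the encoder
x = dpc_encode(u, s)
y = (x + s) % 2                          # toy noiseless channel adds s back
# y is a codeword; in this systematic toy code the info bits are y[:K_INFO]
assert (y[:K_INFO] == u).all()
```

With noise, the receiver would first run the block code's decoder on y and then read off the information bits, discarding the auxiliary ones.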
7. On the Dispersions of the Gel’fand–Pinsker Channel and Dirty Paper Coding.
- Author
- Scarlett, Jonathan
- Subjects
- *ERROR probability, *GAUSSIAN channels, *RANDOM noise theory, *DISPERSIVE channels (Telecommunication), *CHANNEL coding
- Abstract
This paper studies the second-order coding rates for memoryless channels with a state sequence known non-causally at the encoder. In the case of finite alphabets, an achievability result is obtained using constant-composition random coding, and by using a small fraction of the block to transmit the empirical distribution of the state sequence. For error probabilities less than 0.5, it is shown that the second-order rate improves on an existing one based on independent and identically distributed random coding. In the Gaussian case (dirty paper coding) with an almost-sure power constraint, an achievability result is obtained using random coding over the surface of a sphere, and using a small fraction of the block to transmit a quantized description of the state power. It is shown that the second-order asymptotics are identical to the single-user Gaussian channel of the same input power without a state. [ABSTRACT FROM AUTHOR]
- Published
- 2015
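For context, results of this kind refine the standard second-order (dispersion) expansion of the maximum code size (standard notation, not quoted from the article):

```latex
% M^*(n, eps): maximum code size at blocklength n and error probability eps
\log M^*(n,\varepsilon) = nC - \sqrt{nV}\,Q^{-1}(\varepsilon) + O(\log n)
% C: capacity, V: channel dispersion, Q^{-1}: inverse Gaussian tail
```

The abstract's contribution is the constant $V$ (and the achievable second-order term) for the Gel'fand-Pinsker and dirty paper settings.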
8. A Unified Framework for One-Shot Achievability via the Poisson Matching Lemma.
- Author
- Li, Cheuk Ting and Anantharam, Venkat
- Subjects
- BROADCAST channels, SOURCE code, LOSSY data compression, CHANNEL coding, INFORMATION theory, INFORMATION networks, PAPER arts
- Abstract
We introduce a fundamental lemma called the Poisson matching lemma, and apply it to prove one-shot achievability results for various settings, namely channels with state information at the encoder, lossy source coding with side information at the decoder, joint source-channel coding, broadcast channels, distributed lossy source coding, multiple access channels and channel resolvability. Our one-shot bounds improve upon the best known one-shot bounds in most of the aforementioned settings (except multiple access channels and channel resolvability, where we recover bounds comparable to the best known bounds), with shorter proofs in some settings even when compared to the conventional asymptotic approach using typicality. The Poisson matching lemma replaces both the packing and covering lemmas, greatly simplifying the error analysis. This paper extends the work of Li and El Gamal on Poisson functional representation, which mainly considered variable-length source coding settings, whereas this paper studies fixed-length settings, and is not limited to source coding, showing that the Poisson functional representation is a viable alternative to typicality for most problems in network information theory. [ABSTRACT FROM AUTHOR]
- Published
- 2021
9. Am I, Me, and Who's She? Liberation Psychology, Historical Memory, and Muslim women.
- Author
- Mohr, Sarah Huxtable
- Subjects
- COLLECTIVE memory, PSYCHOLOGY, PATRIARCHY, ISLAMOPHOBIA, PAPER arts, MUSLIM women, MISOGYNY
- Abstract
One of the central underpinnings of Islamophobia is the theoretical construction of Muslim women as "Other". Going hand in hand with colonization, the overall Orientalist imaginary has depicted Muslims as misogynistic, homophobic, and gynophobic in contrast to the normal and enlightened Western European subject. Liberation psychology, as a field of decolonial work, emphasizes several main tasks one of which is the recovery of historical memory in relation to how humans see each other and the world. This paper builds on the work of recovering historical memory to emphasize the Indo-European origins of misogyny and patriarchy and the subsequent cover-up of this history as a part of the legacy of colonialism and current narratives of Islamophobia. The paper concludes that the work of psychology should include decoding reality to uncover the true nature of the origins of patriarchy, thus building new, revitalized understandings of human society. [ABSTRACT FROM AUTHOR]
- Published
- 2020
10. The Distortions Region of Broadcasting Correlated Gaussians and Asymmetric Data Transmission Over a Gaussian BC.
- Author
- Bross, Shraga I.
- Subjects
- DATA transmission systems, DIGITAL communications, GAUSSIAN channels, BROADCAST channels, ELECTRONIC paper, VIDEO coding, DIGITAL video broadcasting
- Abstract
A memoryless bivariate Gaussian source is transmitted to a pair of receivers over an average-power limited bandwidth-matched Gaussian broadcast channel. Based on their observations, Receiver 1 reconstructs the first source component while Receiver 2 reconstructs the second source component, both seeking to minimize the expected squared-error distortions. In addition to the source transmission, digital information at a specified rate should be conveyed reliably to Receiver 1, the “stronger” receiver. Given the message rate, we characterize the achievable distortions region. Specifically, there is an $\mathsf{SNR}$-threshold below which Dirty Paper coding of the digital information against a linear combination of the source components is optimal. The threshold is a function of the digital information rate, the source correlation and the distortion at the “stronger” receiver. Above this threshold, a Dirty Paper coding extension of the Tian-Diggavi-Shamai hybrid scheme is shown to be optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
11. Self-Powered Forward Error-Correcting Biosensor Based on Integration of Paper-Based Microfluidics and Self-Assembled Quick Response Codes.
- Author
- Yuan, Mingquan, Liu, Keng-ku, Singamaneni, Srikanth, and Chakrabartty, Shantanu
- Abstract
This paper extends our previous work on silver-enhancement based self-assembling structures for designing reliable, self-powered biosensors with forward error correcting (FEC) capability. At the core of the proposed approach is the integration of paper-based microfluidics with quick response (QR) codes that can be optically scanned using a smartphone. The scanned information is first decoded to obtain the location of a web server, which further processes the self-assembled QR image to determine the concentration of target analytes. The integration substrate for the proposed FEC biosensor is polyethylene, and the patterning of the QR code on the substrate has been achieved using a combination of low-cost ink-jet printing and a regular ballpoint dispensing pen. A paper-based microfluidics channel has been integrated underneath the substrate for acquiring, mixing and flowing the sample to areas on the substrate where different parts of the code can self-assemble in the presence of immobilized gold nanorods. In this paper, we demonstrate proof-of-concept detection using prototypes of QR-encoded FEC biosensors. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
12. I, NEURON: the neuron as the collective
- Author
- Nizami, Lance
- Published
- 2017
13. Research on a soft saturation nonlinear SSVEP signal feature extraction algorithm.
- Author
- Liu, Bo, Gao, Hongwei, Jiang, Yueqiu, and Wu, Jiaxuan
- Abstract
Brain–computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEP) have received widespread attention due to their high information transmission rate, high accuracy, and rich instruction set. However, the performance of SSVEP identification methods strongly depends on the amount of calibration data available for within-subject classification. Some studies use deep learning (DL) algorithms for inter-subject classification, which can reduce the calibration required, but there is still much room for improvement in performance compared with intra-subject classification. To address these problems, this paper proposes e-SSVEPNet, an efficient SSVEP signal recognition deep learning network model based on a soft saturation nonlinear module. For inputs below zero, the module computes its output with an exponential-style calculation, improving robustness to noise. Using an SSVEP data set, two sliding time window lengths (1 s and 0.5 s), and three training data sizes, this paper evaluates the proposed network model against traditional and deep learning baseline methods, and the experimental results obtained with the nonlinear module are compared. Extensive experimental results show that the proposed network achieves the highest average intra-subject classification accuracy on the SSVEP data set, improves SSVEP signal classification and recognition performance, and attains higher decoding accuracy on short signals, giving it strong potential for realizing high-speed SSVEP-based BCIs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
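The abstract's description of the soft saturation module, identity-like output above zero and an exponential-style calculation below zero, matches the shape of an ELU-type activation. A sketch of that reading (our interpretation; the paper's exact formula may differ):

```python
import numpy as np

def soft_saturation(x: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """ELU-style nonlinearity: pass non-negative inputs through unchanged
    and saturate negative inputs smoothly toward -alpha via an exponential,
    which bounds the response to large negative (noisy) inputs."""
    return np.where(x >= 0, x, alpha * (np.exp(x) - 1.0))

x = np.array([-5.0, -1.0, 0.0, 2.0])
y = soft_saturation(x)  # negatives squashed into (-1, 0); positives unchanged
```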
14. Han–Kobayashi and Dirty-Paper Coding for Superchannel Optical Communications.
- Author
- Koike-Akino, Toshiaki, Kojima, Keisuke, Millar, David S., Parsons, Kieran, Kametani, Soichiro, Sugihara, Takashi, Yoshida, Tsuyoshi, Ishida, Kazuyuki, Miyata, Yoshikuni, Matsumoto, Wataru, and Mizuochi, Takashi
- Abstract
Superchannel transmission is a candidate to realize Tb/s-class high-speed optical communications. In order to achieve higher spectrum efficiency, the channel spacing shall be as narrow as possible. However, densely allocated channels can cause non-negligible inter-channel interference (ICI) especially when the channel spacing is close to or below the Nyquist bandwidth. In this paper, we consider joint decoding to cancel the ICI in dense superchannel transmission. To further improve the spectrum efficiency, we propose the use of Han–Kobayashi superposition coding. In addition, for the case when neighboring subchannel transmitters can share data, we introduce dirty-paper coding for pre-cancelation of the ICI. We analytically evaluate the potential gains of these methods when ICI is present for sub-Nyquist channel spacing. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
15. Hill Matrix and Radix-64 Bit Algorithm to Preserve Data Confidentiality.
- Author
- Arshad, Ali, Nadeem, Muhammad, Riaz, Saman, Zahra, Syeda Wajiha, Dutta, Ashit Kumar, Alzaid, Zaid, Alabdan, Rana, Almutairi, Badr, and Almotairi, Sultan
- Subjects
- DATA encryption, DATA security, DATA protection, ALGORITHMS, CONFIDENTIAL communications
- Abstract
There are many cloud data security techniques and algorithms available that can be used to detect attacks on cloud data, but these techniques and algorithms cannot be used to protect data from an attacker. Cloud cryptography is the best way to transmit data in a secure and reliable format. Various researchers have developed mechanisms to transfer data securely by converting it from readable to unreadable form, but these algorithms are not sufficient to provide complete data security. Each algorithm has some data security issues. If effective data protection techniques are used, the attacker will not be able to decipher the encrypted data, and even if the attacker tries to tamper with the data, the attacker will not gain access to the original data. In this paper, several data security techniques are developed that can be used to protect data from attackers completely. First, a customized American Standard Code for Information Interchange (ASCII) table is developed, with the value of each index defined in it. When an attacker tries to decrypt data, the attacker typically applies the predefined ASCII table to the ciphertext, which could otherwise help the attacker decrypt the data. After that, a radix 64-bit encryption mechanism is used, with the help of which the number of cipher values is doubled compared with the original data. When the number of cipher values is double the original data, an attacker who tries to decrypt each value obtains data that has no relation to the original data. Finally, a Hill matrix algorithm is created, with the help of which a key is generated that is valid only for the exact plaintext for which it was created and cannot be used with any other plaintext; the boundaries of each Hill key extend only to that text. The techniques used in this paper are compared with those used in various other papers, and it is discussed how far the current algorithm improves on them. The Kasiski test is then used to verify the validity of the proposed algorithm, showing that if the proposed algorithm is used for data encryption, an attacker cannot break its security using any existing technique or algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2023
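The Hill-matrix step this abstract builds on is the classic Hill cipher, in which blocks of letter codes are multiplied by an invertible key matrix modulo 26. A minimal sketch of that primitive (the key and helper below are illustrative; the paper's full scheme adds the custom ASCII table and radix-64 stages):

```python
import numpy as np

KEY = np.array([[3, 3], [2, 5]])         # invertible mod 26 (det = 9)
KEY_INV = np.array([[15, 17], [20, 9]])  # KEY @ KEY_INV = I (mod 26)

def hill(nums, key):
    """Apply the Hill transform blockwise: each pair of letter codes
    (A=0..Z=25) is a row vector multiplied by the key matrix mod 26."""
    blocks = np.array(nums).reshape(-1, 2)
    return (blocks @ key % 26).flatten().tolist()

plain = [7, 4, 11, 15]                   # "HELP"
cipher = hill(plain, KEY)                # encrypt -> [3, 15, 11, 4]
recovered = hill(cipher, KEY_INV)        # decrypt with the inverse key
```

Decryption works because KEY_INV is the inverse of KEY in arithmetic mod 26, so the two blockwise multiplications cancel.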
16. Dictionary form in decoding, encoding and retention: Further insights.
- Author
- DZIEMIANKO, ANNA
- Subjects
- ELECTRONIC dictionaries, FOREIGN language education, PHONOLOGICAL encoding, PHONOLOGICAL decoding, ENCYCLOPEDIAS & dictionaries, COLLOCATION (Linguistics)
- Abstract
The aim of the paper is to investigate the role of dictionary form (paper versus electronic) in language reception, production and retention. The body of existing research does not give a clear answer as to which dictionary medium benefits users more. Divergent findings from many studies into the topic might stem from differences in research methodology (including the various tasks, participants and dictionaries used by different authors). Even a series of studies conducted by one researcher (Dziemianko, 2010, 2011, 2012b) leads to contradictory conclusions, possibly because of the use of paper and electronic versions of existing dictionaries, and the resulting problem with isolating dictionary form as a factor. To be able to argue with confidence that the results obtained follow from different dictionary formats, rather than presentation issues, research methodology should be improved. To successfully generalize about the significance of the medium for decoding, encoding and learning, the current study replicates previous research, but the presentation of lexicographic data on paper and on screen is now balanced, and the paper/electronic opposition is operationalized more appropriately. A real online dictionary and its paper-based counterpart composed of printouts of screen displays were used in the experiment in which the meaning of English nouns and phrases was explained, and collocations were completed with missing prepositions. A delayed post-test checked the retention of the meanings and collocations. The results indicate that dictionary medium does not play a statistically significant role in reception and production, but it considerably affects retention. [ABSTRACT FROM AUTHOR]
- Published
- 2017
17. Practical Dirty Paper Coding Schemes Using One Error Correction Code With Syndrome.
- Author
- Kim, Taehyun, Kwon, Kyunghoon, and Heo, Jun
- Abstract
Dirty paper coding (DPC) offers an information-theoretic result for pre-cancellation of known interference at the transmitter. In this letter, we propose practical DPC schemes that use only one error correction code. Our designs focus on practical use from the viewpoint of complexity. For a fair comparison with previous schemes, we compute the complexity of the proposed schemes by the number of operations used. Simulation results show that, compared to previous DPC schemes, the proposed schemes require lower transmission power to keep the bit error rate below $10^{-5}$. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
18. Fast Detection Fusion Network (FDFnet): An End to End Object Detection Framework Based on Heterogeneous Image Fusion for Power Facility Inspection.
- Author
- Xu, Xiang, Liu, Gang, Bavirisetti, Durga Prasad, Zhang, Xiangbo, Sun, Boyang, and Xiao, Gang
- Subjects
- OBJECT recognition (Computer vision), IMAGE fusion, FEATURE extraction, IMAGING systems, COMPUTATIONAL complexity, FACILITIES
- Abstract
Visual surveillance for autonomous power facility inspection is considered to be a prominent field of study in the power industry. Existing research focuses on either object detection or image fusion, lacking an integrated treatment of both. Considering this, a single end-to-end object detection method incorporating image fusion, named Fast Detection Fusion Network (FDFnet), is proposed in this paper to output high-quality fused images together with detection results. The parameters in FDFnet are greatly reduced by sharing the feature extraction network between the image fusion and object detection tasks, yielding a large reduction in computational complexity. On this basis, the performance of the object detection algorithm on various types of power facility images is compared and analyzed. For experimentation purposes, an IR (infrared) and VIS (visible) image acquisition system has also been designed. In addition, a dataset named CVPower with different sets of images for power facility fusion detection is constructed for this research field. Experimental results demonstrate that the proposed method can achieve an mAP of at least 70%, process 2 frames per second, and produce high-quality fused images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
19. On Polar Coding for Side Information Channels.
- Author
- Beilin, Barak and Burshtein, David
- Subjects
- *CHANNEL coding, *SOURCE code, *BINARY codes
- Abstract
We propose a successive cancellation list (SCL) encoding and decoding scheme for the Gelfand–Pinsker (GP) problem based on the known nested polar coding scheme. It applies SCL encoding for the source coding part, and SCL decoding with a properly defined CRC for the channel coding part. The scheme shows improved performance compared to the existing method. A known issue with nested polar codes for binary dirty paper is the existence of frozen channel code bits that are not frozen in the source code. These bits need to be retransmitted in a second phase of the scheme, thus reducing the rate and increasing the required blocklength. We provide an improved bound on the size of this set, and on its scaling with respect to the blocklength, when the Bhattacharyya parameter of the test channel used for source coding is sufficiently large, or the Bhattacharyya parameter of the channel seen at the decoder is sufficiently small. The result is formulated for an arbitrary binary-input memoryless GP problem, since unlike the previous results, it does not require degradedness of the two channels mentioned above. Finally, we present simulation results for binary dirty paper and noisy write once memory codes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
20. The Effect of Local Decodability Constraints on Variable-Length Compression.
- Author
- Pananjady, Ashwin and Courtade, Thomas A.
- Subjects
- CODING theory, BINARY sequences, TELECOMMUNICATION, COMPUTATIONAL complexity, DATA structures
- Abstract
We consider a variable-length source coding problem subject to local decodability constraints. In particular, we investigate the blocklength scaling behavior attainable by encodings of $r$-sparse binary sequences, under the constraint that any source bit can be correctly decoded upon probing at most $d$ codeword bits. We consider both adaptive and non-adaptive access models, and derive upper and lower bounds that often coincide up to constant factors. Such a characterization for the fixed-blocklength analog of our problem, known as the bit probe complexity of static membership, remains unknown despite considerable attention from researchers over the last few decades. We also show that locally decodable schemes for sparse sequences are able to decode 0s (frequent source symbols) of the source with far fewer probes on average than they can decode 1s (infrequent source symbols), thus rigorizing the notion that infrequent symbols require high probe complexity, even on average. Connections to the fixed-blocklength model and to communication complexity are also briefly discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
21. Overview and Efficiency of Decoder-Side Depth Estimation in MPEG Immersive Video.
- Author
- Mieloch, Dawid, Garus, Patrick, Milovanovic, Marta, Jung, Joel, Jeong, Jun Young, Ravi, Smitha Lingadahalli, and Salahieh, Basel
- Subjects
- VIDEO coding, MACHINE learning, VIDEOS, VIDEO codecs, BINARY sequences, VIDEO processing
- Abstract
This paper presents the overview and rationale behind the Decoder-Side Depth Estimation (DSDE) mode of the MPEG Immersive Video (MIV) standard, using the Geometry Absent profile, for efficient compression of immersive multiview video. A MIV bitstream generated by an encoder operating in the DSDE mode does not include depth maps. It only contains the information required to reconstruct them in the client or in the cloud: decoded views and metadata. The paper explains the technical details and techniques supported by this novel MIV DSDE mode. The description additionally includes the specification of Geometry Assistance Supplemental Enhancement Information, which helps to reduce the complexity of depth estimation when performed in the cloud or at the decoder side. The depth estimation in MIV is a non-normative part of the decoding process; therefore, any method can be used to compute the depth maps. This paper lists a set of requirements for depth estimation, induced by the specific characteristics of the DSDE. The depth estimation reference software, continuously and collaboratively developed with MIV to meet these requirements, is presented in this paper. Several original experimental results are presented. The efficiency of the DSDE is compared to two MIV profiles. The combined non-transmission of depth maps and efficient coding of textures enabled by the DSDE leads to efficient compression and rendering-quality improvement compared to the usual encoder-side depth estimation. Moreover, results of the first evaluation of state-of-the-art multiview depth estimators in the DSDE context, including machine learning techniques, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2022
22. Variable-Length Coding With Shared Incremental Redundancy: Design Methods and Examples.
- Author
- Wang, Haobo, Ranganathan, Sudarsan V. S., and Wesel, Richard D.
- Subjects
- VIDEO coding, LOW density parity check codes, PARALLEL processing, REDUNDANCY in engineering
- Abstract
Variable-length (VL) coding with feedback is a commonly used technique that can approach point-to-point Shannon channel capacity with a significantly shorter average codeword length than fixed-length coding without feedback. This paper uses the inter-frame coding of Zeineddine and Mansour, originally introduced to address varying channel-state conditions in broadcast wireless communication, to approach capacity on point-to-point channels using VL codes without feedback. The per-symbol complexity is comparable to decoding the VL code with feedback (plus the additional complexity of a small peeling decoder amortized over many VL codes), and the structure lends itself to massively parallel encoders and decoders in which the VL decoders run simultaneously. This paper provides an analytical framework and a design process for the degree distribution of the inter-frame code that allows the feedback-free system to achieve 96% or more of the throughput of the original VL code with feedback. As examples of VL codes, we consider non-binary (NB) low-density parity-check (LDPC), binary LDPC, and convolutional VL codes. The NB-LDPC VL code with an 8-bit CRC and an average codeword length of 336 bits achieves 85% of capacity with four rounds of ACK/NACK feedback. The proposed scheme using shared incremental redundancy without feedback achieves 97% of that performance, or 83% of the channel capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
23. Data Dissemination Using Instantly Decodable Binary Codes in Fog-Radio Access Networks.
- Author
- Douik, Ahmed and Sorour, Sameh
- Subjects
- SELECTIVE dissemination of information, BINARY codes, RADIO access networks, MOBILE communication systems, LINEAR network coding, DECODING algorithms
- Abstract
This paper considers a device-to-device (D2D) fog-radio access network wherein a set of users are required to store/receive a set of files. The D2D devices are connected to a subset of the cloud data centers and thus possess a subset of the data. This paper is interested in reducing the total time of communication, i.e., the completion time, required to disseminate all files among all devices using instantly decodable network coding (IDNC). Unlike previous studies that assume a fully connected communication network, this paper tackles the more realistic scenario of a partially connected network in which devices are not all in the transmission range of one another. The joint optimization of selecting the transmitting device(s) and the file combination(s) is first formulated, and its intractability is exhibited. The completion time is approximated using the celebrated decoding delay approach by deriving the relationship between the quantities in a partially connected network. The paper introduces the cooperation graph and demonstrates that the problem is equivalent to a maximum weight clique problem over the newly designed graph. Extensive simulations reveal that the proposed solution provides noticeable performance enhancement and outperforms previously proposed IDNC-based schemes. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
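The abstract reduces joint transmitter/file selection to a maximum weight clique search on its cooperation graph. A brute-force sketch of that graph primitive on a toy graph (the vertices, weights and edges below are made up; the paper's graph encodes device/file combinations):

```python
import itertools

def max_weight_clique(vertices, weights, edges):
    """Return the clique with the largest total vertex weight
    (exhaustive search; fine for toy graphs, exponential in general)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    best, best_w = (), 0
    for r in range(1, len(vertices) + 1):
        for cand in itertools.combinations(vertices, r):
            if all(v in adj[u] for u, v in itertools.combinations(cand, 2)):
                w = sum(weights[v] for v in cand)
                if w > best_w:
                    best, best_w = cand, w
    return set(best), best_w

verts = ["a", "b", "c", "d"]
w = {"a": 3, "b": 2, "c": 2, "d": 6}
e = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
clique, total = max_weight_clique(verts, w, e)  # -> ({"c", "d"}, 8)
```

The NP-hardness of maximum weight clique is consistent with the intractability the paper exhibits for the exact joint optimization.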
24. Decoding of MDP Convolutional Codes over the Erasure Channel under Linear Systems Point of View.
- Author
-
García-Planas, Maria Isabel and Um, Laurence E.
- Subjects
LINEAR dynamical systems ,BLOCK codes ,TIME complexity ,SYSTEMS theory ,LINEAR systems ,CONTROLLABILITY in systems engineering - Abstract
This paper highlights the decoding capabilities of MDP convolutional codes over the erasure channel by defining them as discrete linear dynamical systems, so that the controllability and observability properties of linear systems theory can be applied, in particular output observability, which is easily described in matrix language. These capabilities are compared against those of MDS block codes over the same channel. Not only is the time complexity better, but the decoding capabilities are also increased with this approach, because convolutional codes are more flexible in handling variable-length data streams than block codes, which are fixed-length and less adaptable to varying data lengths without padding or other adjustments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Decoding, reading and writing: the double helix theory of teaching.
- Author
-
Wyse, Dominic and Hacking, Charlotte
- Abstract
This paper presents a new theory and model of the teaching of decoding, reading and writing. The first part of the paper reviews a selection of influential models of learning to read and write that to varying degrees have been used as the basis for approaches to teaching, including the Simple View of Reading. As well as noting some strengths of the models in relation to children's learning, limitations are identified in terms of their applicability as models of teaching. The second part of the paper presents seven components that are central to teaching reading and writing, derived from social, cultural and cognitive research and theory. Explanations for the relevance of the components are offered, and the seminal and more recent research that underpins them is summarised. The final part of the paper introduces a new theory and model of teaching, The Double Helix of Reading and Writing. It is argued that this model provides a rationale for a balanced approach to teaching, and an alternative to synthetic phonics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. 3-D NAND Flash Value-Aware SSD: Error-Tolerant SSD Without ECCs for Image Recognition.
- Author
-
Deguchi, Yoshiaki, Nakamura, Toshiki, Hayakawa, Atsuna, and Takeuchi, Ken
- Subjects
IMAGE recognition (Computer vision) ,SOLID state drives ,FLASH memory ,ERROR-correcting codes ,THRESHOLD voltage ,ERROR rates ,ARTIFICIAL neural networks - Abstract
This paper proposes a Value-Aware solid-state drive (SSD) with fast access speed and low power consumption achieved by eliminating error-correcting codes (ECC). The Value-Aware SSD exploits the error tolerance of image-recognition applications using deep neural networks (DNN) to maintain reliability. In the previous paper that proposed the Value-Aware SSD, a fast ECC decoder was implemented and the SSD was evaluated with the 32-bit floating-point data format. In this paper, by contrast, the proposed Value-Aware SSD is analyzed with 32-bit and 8-bit fixed-point data formats and achieves higher reliability even without ECC through two newly proposed techniques, Critical Bit Error Reduction (CBER) and Middle & Lower Page Error Reduction (M&L-PER), targeting the 32-bit and 8-bit application data formats, respectively. These techniques modulate the threshold voltage ($V_{\mathrm {TH}}$) distribution of memory cells according to the importance of each stored bit. With the proposed CBER, a bit error rate (BER) of as much as 15% in the NAND flash is tolerated while the application retains high image-recognition accuracy. Even when bit precision is truncated to 8 bits, a 3.9% BER is accepted with M&L-PER. Fast read access and low power consumption are realized because ECC is not required. Finally, this paper analyzes the Value-Aware techniques with 3-D multi-level cell (MLC) NAND flash to compare their effects on 3D-MLC and triple-level cell (TLC) NAND flash. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
27. A Texture-Hidden Anti-Counterfeiting QR Code and Authentication Method.
- Author
-
Wang, Tianyu, Zheng, Hong, You, Changhui, and Ju, Jianping
- Subjects
TWO-dimensional bar codes ,GAUSSIAN distribution - Abstract
This paper designs a texture-hidden QR code to prevent illegal copying, against which an ordinary QR code offers no protection due to its lack of anti-counterfeiting ability. Combining random texture patterns with a refined QR code, the code is not only capable of regular encoding but also has a strong anti-copying capability. Based on the proposed code, a quality-assessment algorithm (MAF) and a dual-feature detection algorithm (DFDA) are also proposed. The MAF is compared with several current no-reference algorithms and achieves 95% and 96% accuracy for blur type and blur degree, respectively. The DFDA is compared with various texture and corner methods and achieves accuracy, precision, and recall of up to 100%, and also performs well on datasets attacked by reduction and cropping. Experiments on self-built datasets show that the code designed in this paper has excellent feasibility and anti-counterfeiting performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
28. Adaptive Graph Auto-Encoder for General Data Clustering.
- Author
-
Li, Xuelong, Zhang, Hongyuan, and Zhang, Rui
- Subjects
WEIGHTED graphs ,TASK analysis ,FUZZY clustering technique ,TANNER graphs - Abstract
Graph-based clustering plays an important role in the clustering area. Recent studies on graph neural networks (GNNs) have achieved impressive success on graph-type data. However, in general clustering tasks the graph structure of the data does not exist, so GNNs cannot be applied to clustering directly, and the strategy used to construct a graph is crucial for performance. How to extend GNNs to general clustering tasks is therefore an attractive problem. In this paper, we propose a graph auto-encoder for general data clustering, AdaGAE, which constructs the graph adaptively according to a generative perspective on graphs. The adaptive process is designed to induce the model to exploit the high-level information behind the data and fully utilize its non-Euclidean structure. Importantly, we find that a simple update of the graph results in severe degeneration, which can be summarized as: better reconstruction means a worse update. We provide rigorous theoretical and empirical analysis, and then design a novel mechanism to avoid the collapse. By extending generative graph models to general-type data, a graph auto-encoder with a novel decoder is devised, and weighted graphs can also be applied to GNNs. AdaGAE performs well and stably on datasets of different scales and types. Besides, it is insensitive to the initialization of parameters and requires no pretraining. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
29. Research on the Performance of an End-to-End Intelligent Receiver with Reduced Transmitter Data.
- Author
-
Wang, Mingbo, Wang, Anyi, Zhang, Yuzhi, and Chai, Jing
- Subjects
TRANSMITTERS (Communication) ,TELECOMMUNICATION systems ,DATA transmission systems - Abstract
A large amount of data transmission is one of the challenges faced by communication systems. In this paper, we revisit the intelligent receiver built from a neural network, and we find that it can reduce the data at the transmitting end while improving decoding accuracy. Specifically, we first construct the intelligent receiver model and then design two ways to reduce the data at the transmitter side, namely end-of-transmitter data trimming and equal-interval data trimming, to investigate the decoding performance of the receiver under the different trimming methods. The simulation results show that the receiver retains accurate decoding performance with a small amount of trimming at the end of the transmitter data, while with equal-interval trimming the decoding performance of the intelligent receiver is better than that of the conventional receiver with complete data. Moreover, for the same reduction in data length at the transmitter side, the receiver with equal-interval trimming has a lower BER. This paper provides a new solution for reducing the amount of data at the transmitter side. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
30. Securing Files Using Hybrid Cryptography.
- Author
-
Padmavathi, V., Kumar, M. Jayanth, and Reddy, M. Saikiran
- Subjects
DATA security ,PUBLIC key cryptography ,DATA encryption ,CRYPTOGRAPHY ,CONFIDENTIAL communications - Abstract
Data security in terms of file encryption is crucial: nearly every file nowadays contains either personal information or industrial data. Hybrid encryption secures data transfer by combining symmetric and asymmetric encryption algorithms, allowing users to send files safely. Because asymmetric encryption alone slows down the encryption process, symmetric encryption is used for the file itself, so the advantages of both forms of encryption are utilized. The result is another layer of security with no extra burden on system performance. The idea of hybrid encryption is simple: instead of using AES alone, we use AES to encrypt the file and then, to maintain the secrecy of the key, encrypt the AES key using RSA. This paper discusses a different approach to securely storing files, and the proposed scheme also ensures that the model provides confidentiality and integrity mechanisms. [ABSTRACT FROM AUTHOR]
- Published
- 2021
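The encrypt-file-with-AES, wrap-key-with-RSA structure described in the abstract can be sketched in a few lines. The following toy sketch is not from the paper: it stands in for AES with a SHA-256 counter-mode keystream and uses textbook RSA with tiny assumed primes, so it is illustrative only and deliberately insecure; a real system would use AES-GCM and RSA-OAEP from a vetted library.

```python
import hashlib

# Toy hybrid-encryption sketch (illustrative only, NOT secure).
# Textbook RSA with small primes: p=61, q=53 -> n=3233, e=17.
P, Q = 61, 53
N = P * Q
E = 17
D = pow(E, -1, (P - 1) * (Q - 1))  # private exponent (Python 3.8+)

def keystream(key: bytes, length: int) -> bytes:
    """SHA-256 counter-mode keystream (stand-in for AES-CTR)."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def sym_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; encryption and decryption are the same op.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

def hybrid_encrypt(data: bytes, sym_key: int):
    # 1) encrypt the file with the symmetric key,
    # 2) wrap the symmetric key with the RSA public key (N, E).
    ct = sym_encrypt(sym_key.to_bytes(2, "big"), data)
    wrapped_key = pow(sym_key, E, N)
    return ct, wrapped_key

def hybrid_decrypt(ct: bytes, wrapped_key: int) -> bytes:
    # Unwrap the key with the RSA private exponent, then decrypt.
    sym_key = pow(wrapped_key, D, N)
    return sym_encrypt(sym_key.to_bytes(2, "big"), ct)

ct, wk = hybrid_encrypt(b"industrial data", 1234)
assert hybrid_decrypt(ct, wk) == b"industrial data"
```

Only the small wrapped key pays the cost of the slow asymmetric operation; the file body is processed by the fast symmetric cipher.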
31. Fundamental Limits of Distributed Linear Encoding.
- Author
-
Khooshemehr, Nastaran Abadi and Maddah-Ali, Mohammad Ali
- Subjects
CODING theory ,FINITE fields ,ENCODING ,LINEAR systems ,CHANNEL coding ,LINEAR codes - Abstract
In general coding theory, we often assume that error is observed in transferring or storing encoded symbols, while the process of encoding itself is error-free. Motivated by recent applications of coding theory, in this paper, we consider the case where the process of encoding is distributed and prone to error. We introduce the problem of distributed encoding, comprised of a set of $K \in \mathbb {N}$ isolated source nodes and $N \in \mathbb {N}$ encoding nodes. Each source node has one symbol from a finite field, which is sent to each of the encoding nodes. Each encoding node stores an encoded symbol from the same field, as a function of the received symbols. However, some of the source nodes are controlled by the adversary and may send different symbols to different encoding nodes. Depending on the number of the adversarial nodes, denoted by $\beta \in \mathbb {N}$ , and the cardinality of the set of symbols that each one generates, denoted by $v \in \mathbb {N}$ , the process of decoding from the encoded symbols could be impossible. Assume that a decoder connects to an arbitrary subset of $t \in \mathbb {N}$ encoding nodes and wants to decode the symbols of the honest nodes correctly, without necessarily identifying the sets of honest and adversarial nodes. An important characteristic of a distributed encoding system is $t^{*} \in \mathbb {N}$ , the minimum of such $t$ , which is a function of $K$ , $N$ , $\beta $ , and $v$. In this paper, we study the distributed linear encoding system, i.e. one in which the encoding nodes use linear coding. We show that $t^{*}_{\textrm {Linear}}=K+2\beta (v-1)$ , if $N\ge K+2\beta (v-1)$ , and $t^{*}_{\textrm {Linear}}=N$ , if $N\le K+2\beta (v-1)$. In order to achieve $t^{*}_{\textrm {Linear}}$ , we use random linear coding and show that in any feasible solution that the decoder finds, the messages of the honest nodes are decoded correctly. 
In order to prove the converse of the fundamental limit, we show that when the adversary behaves in a particular way, it can always confuse the decoder between two feasible solutions that differ in the message of at least one honest node. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
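The characterization $t^{*}_{\textrm {Linear}} = \min (N, K+2\beta (v-1))$ stated in the abstract is easy to tabulate. A minimal sketch, with parameter names following the abstract:

```python
def t_star_linear(K: int, N: int, beta: int, v: int) -> int:
    """Minimum number of encoding nodes a decoder must contact
    (per the abstract's characterization): K + 2*beta*(v-1),
    capped at the total number of encoding nodes N."""
    return min(N, K + 2 * beta * (v - 1))

# With no adversaries (beta=0) or adversaries that send a single
# consistent symbol (v=1), contacting K nodes suffices.
assert t_star_linear(K=4, N=10, beta=0, v=2) == 4
assert t_star_linear(K=4, N=10, beta=2, v=2) == 8
assert t_star_linear(K=4, N=6, beta=2, v=3) == 6  # capped at N
```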
32. Reverse Low-Power Broadside Tests.
- Author
-
Pomeranz, Irith
- Subjects
LOGIC circuits ,INTEGRATED circuit design ,LOGIC circuits testing - Abstract
This paper defines a new type of low-power broadside test, called a reverse low-power broadside test, whose application requires design-for-testability logic. The unique feature of a reverse low-power broadside test is that it duplicates the switching activity during the second functional capture cycle of a given low-power broadside test, except that the signal transitions are reversed. Thus, the switching activity of a reverse low-power broadside test duplicates that of a low-power broadside test in every subcircuit and on every line individually. In addition, the reversed test detects different faults and can thus increase the fault coverage of a low-power broadside test set. This paper studies the ability of reverse low-power broadside tests to increase transition fault coverage in benchmark circuits, considering functional broadside tests as well as low-power broadside tests that are not functional. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
33. Optimum Overflow Thresholds in Variable-Length Source Coding Allowing Non-Vanishing Error Probability.
- Author
-
Nomura, Ryo and Yagi, Hideki
- Subjects
SOURCE code ,CODING theory ,ERROR probability ,CHANNEL coding ,BOOLEAN functions ,PROBABILITY theory - Abstract
The variable-length source coding problem allowing an error probability up to some constant is considered for general sources. For this problem, the optimum mean codeword length of variable-length codes has already been determined. In this paper, we focus instead on the overflow (or excess codeword length) probability. The infimum of overflow thresholds under the constraint that both the error probability and the overflow probability are smaller than or equal to some constant is called the optimum overflow threshold. We first derive finite-length upper and lower bounds on these probabilities in order to analyze the optimum overflow thresholds. Using these bounds, we then determine the general formula for the optimum overflow thresholds in both first-order and second-order forms. Next, we consider another expression of the derived general formula in order to reveal its relationship with the optimum coding rate in the fixed-length source coding problem. Finally, we apply the general formula derived in this paper to the case of stationary memoryless sources. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
34. Almost Optimal Construction of Functional Batch Codes Using Extended Simplex Codes.
- Author
-
Yohananov, Lev and Yaakobi, Eitan
- Subjects
LINEAR codes ,INFORMATION retrieval ,LOGICAL prediction - Abstract
A functional $k$ -batch code of dimension $s$ consists of $n$ servers storing linear combinations of $s$ linearly independent information bits. Any multiset request of size $k$ of linear combinations (or requests) of the information bits can be recovered by $k$ disjoint subsets of the servers. The goal under this paradigm is to find the minimum number of servers for given values of $s$ and $k$. A recent conjecture states that for any $k=2^{s-1}$ requests the optimal solution requires $2^{s}-1$ servers. This conjecture is verified for $s \leqslant 5$ but previous work could only show that codes with $n=2^{s}-1$ servers can support a solution for $k=2^{s-2} + 2^{s-4} + \left \lfloor{ \frac { 2^{s/2}}{\sqrt {24}} }\right \rfloor $ requests. This paper reduces this gap and shows the existence of codes for $k=\lfloor \frac {5}{6}2^{s-1} \rfloor - s$ requests with the same number of servers. Another construction in the paper provides a code with $n=2^{s+1}-2$ servers and $k=2^{s}$ requests, which is an optimal result. These constructions are mainly based on extended Simplex codes and equivalently provide constructions for parallel Random I/O (RIO) codes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
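The gap the abstract describes can be made concrete by evaluating the three request counts for $n=2^{s}-1$ servers: the earlier bound, the paper's improved bound, and the conjectured optimum $k=2^{s-1}$. A small sketch of that arithmetic (formulas transcribed from the abstract):

```python
import math

def prior_k(s: int) -> int:
    # Requests supported by previous work with n = 2^s - 1 servers.
    return 2**(s - 2) + 2**(s - 4) + math.floor(2**(s / 2) / math.sqrt(24))

def new_k(s: int) -> int:
    # Requests supported by this paper's construction, same n.
    return (5 * 2**(s - 1)) // 6 - s

def conjectured_k(s: int) -> int:
    # Conjectured optimum: k = 2^(s-1) requests with 2^s - 1 servers.
    return 2**(s - 1)

# The new construction narrows, but does not close, the gap.
for s in (8, 12, 16):
    assert prior_k(s) < new_k(s) < conjectured_k(s)
```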
35. Delay Reduction in Multi-Hop Device-to-Device Communication Using Network Coding.
- Author
-
Douik, Ahmed, Sorour, Sameh, Al-Naffouri, Tareq Y., Yang, Hong-Chuan, and Alouini, Mohamed-Slim
- Abstract
This paper considers the problem of reducing the broadcast decoding delay of wireless networks using instantly decodable network coding-based device-to-device communications. In contrast with previous works that assume a fully connected network, this paper investigates a partially connected configuration in which multiple devices are allowed to transmit simultaneously. To that end, the different events occurring at each device are identified so as to derive an expression for the probability distribution of the decoding delay. Afterward, the joint optimization problem over the set of transmitting devices and the packet combination of each is formulated. The optimal solution of the joint optimization problem is derived using a graph-theoretic approach by introducing the cooperation graph, in which each vertex represents a transmitting device with a weight reflecting its contribution to the network. This paper solves the problem by reformulating it as a maximum weight clique problem, which can be solved efficiently. Numerical results suggest that the proposed solution outperforms state-of-the-art schemes and provides significant gains, especially for poorly connected networks. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
36. List Decoding of Insertions and Deletions.
- Author
-
Wachter-Zeh, Antonia
- Subjects
POLYNOMIAL time algorithms ,ALGEBRA ,ALGORITHMS ,DECODERS (Electronics) ,POLYNOMIALS - Abstract
List decoding of insertions and deletions in the Levenshtein metric is considered. The Levenshtein distance between two sequences is the minimum number of insertions and deletions needed to turn one of the sequences into the other. In this paper, a Johnson-like upper bound on the maximum list size when list decoding in the Levenshtein metric is derived. This bound depends only on the length and minimum Levenshtein distance of the code, the length of the received word, and the alphabet size. It shows that polynomial-time list decoding beyond half the Levenshtein distance is possible for many parameters. Further, we also prove a lower bound on list decoding of deletions with the well-known binary Varshamov–Tenengolts codes, which shows that the maximum list size grows exponentially with the number of deletions. Finally, an efficient list decoding algorithm for two insertions/deletions with VT codes is given. This decoder can be modified to a polynomial-time list decoder of any constant number of insertions/deletions. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
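The metric the abstract works in, the minimum number of insertions and deletions (no substitutions) needed to turn one sequence into the other, has a standard dynamic-programming computation. A minimal sketch, not from the paper:

```python
def levenshtein_indel(a: str, b: str) -> int:
    """Minimum number of insertions and deletions turning a into b
    (the Levenshtein metric of the abstract, without substitutions);
    equals len(a) + len(b) - 2 * LCS(a, b)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all remaining symbols of a
    for j in range(n + 1):
        d[0][j] = j          # insert all remaining symbols of b
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                d[i][j] = d[i - 1][j - 1]
            else:
                d[i][j] = 1 + min(d[i - 1][j],   # delete from a
                                  d[i][j - 1])   # insert into b
    return d[m][n]

assert levenshtein_indel("0110", "0110") == 0
assert levenshtein_indel("0110", "010") == 1   # one deletion
assert levenshtein_indel("ab", "ba") == 2      # delete + insert
```

A list decoder in this metric returns every codeword within a given radius of the received word; the Johnson-like bound in the paper limits how large that list can be.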
37. A Fully Integrated Energy-Efficient H.265/HEVC Decoder With eDRAM for Wearable Devices.
- Author
-
Tikekar, Mehul, Sze, Vivienne, and Chandrakasan, Anantha P.
- Subjects
VIDEO coding ,SMARTWATCHES ,RANDOM access memory ,VIRTUAL reality ,DECODERS (Electronics) - Abstract
This paper proposes a fully integrated H.265/high efficiency video coding (HEVC) video decoder that supports real-time video playback within the 50-mW power budget of wearable devices, such as smart watches and virtual reality (VR) headsets. Specifically, this paper focuses on reducing data movement to and from off-chip memory as it dominates energy consumption, consuming 2.8–6 times more energy than processing in most video decoders. Embedded dynamic random access memory (eDRAM) is used for main memory, and several techniques are proposed to reduce the power consumption of the eDRAM itself: 1) lossless compression is used to store reference frames in two times fewer eDRAM macros, reducing refresh power by 33%; 2) eDRAM macros are powered up on-demand to further reduce refresh power by 33%; and 3) syntax elements are distributed to four decoder cores in a partially compressed form to reduce decoupling buffer power by four times. These approaches reduce eDRAM power by two times in a fully integrated H.265/HEVC decoder with the lowest reported system power. The test chip containing 10.5 MB of eDRAM requires no external memory and consumes 24.9–30.6 mW for decoding $1920\times 1080$ video at 24–50 frames/s. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
38. Improved Bounds on Lossless Source Coding and Guessing Moments via Rényi Measures.
- Author
-
Sason, Igal and Verdu, Sergio
- Subjects
CODING theory ,CRYPTOGRAPHY ,SEQUENTIAL decoding ,GAUSSIAN function ,STATISTICAL hypothesis testing - Abstract
This paper provides upper and lower bounds on the optimal guessing moments of a random variable taking values on a finite set when side information may be available. These moments quantify the number of guesses required for correctly identifying the unknown object and, similarly to Arikan’s bounds, they are expressed in terms of the Arimoto-Rényi conditional entropy. Although Arikan’s bounds are asymptotically tight, the improvement of the bounds in this paper is significant in the non-asymptotic regime. Relationships between moments of the optimal guessing function and the MAP error probability are also established, characterizing the exact locus of their attainable values. The bounds on optimal guessing moments serve to improve non-asymptotic bounds on the cumulant generating function of the codeword lengths for fixed-to-variable optimal lossless source coding without prefix constraints. Non-asymptotic bounds on the reliability function of discrete memoryless sources are derived as well. Relying on these techniques, lower bounds on the cumulant generating function of the codeword lengths are derived, by means of the smooth Rényi entropy, for source codes that allow decoding errors. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
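The guessing moments the abstract bounds refer to the optimal guessing strategy: guess candidate values in order of decreasing probability, so the $i$-th guess succeeds with the $i$-th largest probability. A minimal sketch of that moment computation (not the paper's bounds, just the quantity being bounded):

```python
def optimal_guessing_moment(pmf, rho: float) -> float:
    """rho-th moment of the optimal guessing function G(X):
    guesses are made in order of decreasing probability, so the
    i-th guess (1-indexed) succeeds with the i-th largest mass."""
    probs = sorted(pmf, reverse=True)
    return sum(p * (i ** rho) for i, p in enumerate(probs, start=1))

# Uniform over 4 values, rho=1: expected guesses (1+2+3+4)/4 = 2.5.
assert optimal_guessing_moment([0.25] * 4, 1.0) == 2.5
# A skewed source needs fewer guesses on average.
assert optimal_guessing_moment([0.7, 0.1, 0.1, 0.1], 1.0) < 2.5
```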
39. Polar Channel Coding Schemes for Two-Dimensional Magnetic Recording Systems.
- Author
-
Saito, Hidetoshi
- Subjects
MAGNETIC recorders & recording ,CHANNEL coding ,ERROR correction (Information theory) ,MODULATION coding ,DECODING algorithms - Abstract
This paper proposes new two-dimensional magnetic recording (TDMR) systems using polar channel coding as practical error-correction coding. The time and space complexities of the encoding/decoding algorithms based on polar channel coding are known to be $\mathcal {O}(N \log N)$, where $N$ is the codeword (block) length. Comparing the error-correction performance of a polar code with that of a low-density parity-check (LDPC) code at the same rate, the polar code has a longer length yet its decoder still has a lower implementation complexity than the LDPC decoder. Therefore, relatively low-complexity coding schemes are preferable for TDMR systems at high rates and relatively long codeword lengths. In this paper, the proposed TDMR system serially concatenates a two-dimensional (2-D) modulation code with one-dimensional (1-D) polar codes in each down-track direction. These element polar codes are designed on the fundamentals of channel polarization theory, applied to channels with memory. The performance of the signal processing scheme with concatenated coding and generalized partial response equalization is evaluated for the proposed TDMR system using bit-patterned media by computer simulation. The results show that the block error rate performance of the proposed TDMR system with the 2-D modulation and polar channel coding schemes is superior to that of the 1-D system with conventional 1-D high-rate modulation and LDPC coding schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
40. Decoding kinematic information from beta-band motor rhythms of speech motor cortex: a methodological/analytic approach using concurrent speech movement tracking and magnetoencephalography.
- Author
-
Anastasopoulou, Ioanna, Cheyne, Douglas Owen, van Lieshout, Pascal, and Johnson, Blake Warren
- Subjects
MOTOR cortex ,SPEECH ,SENSORIMOTOR cortex ,MAGNETOENCEPHALOGRAPHY ,SCANNING systems ,ROBUST control ,TRANSCRANIAL magnetic stimulation ,NEUROBIOLOGY - Abstract
Introduction: Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods: Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances /ipa/ and /api/, produced at normal and faster rates. Results: The results show that (1) speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8-12 Hz) and beta band (13-30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern analytic techniques. Discussion: These results show that MASK provides the capability for deriving subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-register with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. An Investigation of the Cross-Language Transfer of Reading Skills: Evidence from a Study in Nigerian Government Primary Schools.
- Author
-
Humble, Steve, Dixon, Pauline, Gittins, Louise, and Counihan, Chris
- Subjects
READING ,PRIMARY schools ,SCHOOL day ,ENGLISH letters ,LANGUAGE policy ,PHONOLOGICAL awareness ,LANGUAGE transfer (Language learning) - Abstract
This paper investigates the linguistic interdependence of Grade 3 children studying in government primary schools in northern Nigeria who are learning to read in Hausa (L1) and English (L2) simultaneously. There are few studies in the African context that consider linguistic interdependence and the bidirectional influences of literacy skills in multilingual contexts. A total of 2328 Grade 3 children were tested on their Hausa and English letter sound knowledge (phonemes) and reading decoding skills (word) after participating in a two-year English structured reading intervention programme as part of their school day. In Grade 4, these children will become English immersion learners, with English becoming the medium of instruction. Carrying out bivariate correlations, we find a large and strongly positively significant correlation between L1 and L2 test scores. Concerning bidirectionality, a feedback path model illustrates that the L1 word score predicts the L2 word score and vice versa. Multi-level modelling is then used to consider the variation in test scores. Almost two thirds of the variation in the word score is attributable to the pupil level and one third to the school level. The Hausa word score is significantly predicted through Hausa sound and English word score. English word score is significantly predicted through Hausa word and English sound score. The findings have implications for language policy and classroom instruction, showing the importance of cross-language transfer between reading skills. The overall results support bidirectionality and linguistic interdependence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
42. Optimal Construction for Decoding 2D Convolutional Codes over an Erasure Channel.
- Author
-
Pinto, Raquel, Spreafico, Marcos, and Vela, Carlos
- Subjects
BIBLIOGRAPHY ,BLOCK codes ,REED-Solomon codes - Abstract
In general, the problem of building optimal convolutional codes under a given criterion is hard, especially when field size restrictions are applied. In this paper, we confront the challenge of constructing an optimal 2D convolutional code for communication over an erasure channel. We propose a general construction method for these codes. Specifically, we provide a construction that is optimal with respect to the decoding method presented in the literature. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. Polar Codes for Quantum Reading.
- Author
-
Pereira, Francisco Revson F. and Mancini, Stefano
- Subjects
QUANTUM states ,SQUARE root ,READING ,ERROR probability ,OPEN-ended questions - Abstract
Quantum readout provides a general framework for formulating the statistical discrimination of quantum channels. Several paths have been taken toward this problem. However, much remains to be done in optimizing channel discrimination using classical codes. At least two open questions can be pointed out: how to construct low-complexity encoding schemes that are interesting for channel discrimination and, more importantly, how to develop capacity-achieving protocols. This paper presents a solution to these questions using polar codes. First, we characterize the information rate and reliability parameter of the channels under polar encoding. We also show that the error probability of the proposed scheme decays exponentially with the square root of the code length. Second, an analysis of the optimal quantum states to be used as probes is given. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
44. A Code Rate-Compatible High-Throughput Hardware Implementation Scheme for QKD Information Reconciliation.
- Author
-
Zhu, Ming, Cui, Ke, Li, Simeng, Kong, Lei, Tang, Shibiao, and Sun, Jian
- Abstract
For high-speed quantum key distribution (QKD) post-processing, information reconciliation is the most computationally intensive step and usually acts as the system's speed bottleneck. Reconciliation efficiency and operation throughput are the two most important performance parameters, but in practical realizations they are often traded off against each other due to the available hardware resources. Another characteristic of information reconciliation is that its channel is time-varying, so to guarantee high efficiency a rate-compatible error-correction scheme should be adopted. To cope with these problems, this paper proposes a rate-compatible high-efficiency reconciliation algorithm based on quasi-cyclic (QC) low-density parity-check (LDPC) codes and puncturing algorithms. In addition, to obtain high throughput while maintaining low implementation complexity, the normalized min-sum algorithm (NMSA) is realized and optimized using the fast prototyping tool of high-level synthesis (HLS). The proposed information reconciliation module was implemented on the Xilinx Zynq Ultrascale+ ZCU102 development board. Experimental results show that the maximum throughput of the implemented reconciliation module reaches 136 Mbps, while the efficiency factor stays below 1.32 across the error rate range of 1.7%∼10.6% at a residual frame error rate of $10^{-3}$. The good overall performance obtained by our design can strongly support the development of modern high-speed QKD systems. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
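Aside: the NMSA check-node update at the heart of decoders like this one is simple to state. The sketch below shows the textbook rule (outgoing message = scaling factor × product of the other incoming signs × minimum of the other incoming magnitudes); the scaling factor 0.75 is a common illustrative choice, not the paper's tuned value, and nonzero input LLRs are assumed.

```python
import numpy as np

def nms_check_update(llrs, alpha=0.75):
    """Normalized min-sum check-node update for one check node.
    For each edge i, the outgoing message is
        alpha * (product of signs of the other LLRs) * (min |other LLRs|).
    Assumes all incoming LLRs are nonzero."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    total_sign = np.prod(signs)
    mags = np.abs(llrs)
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        # sign over "all but i" = total_sign / signs[i] = total_sign * signs[i]
        out[i] = alpha * total_sign * signs[i] * np.delete(mags, i).min()
    return out
```

The scaling by `alpha` < 1 compensates for the min-sum overestimate of the exact sum-product magnitude, which is what makes the approximation hardware-friendly without a large efficiency loss.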
45. Adaptive Loop Filter Hardware Design for 4K ASIC VVC Decoders.
- Author
-
Farhat, Ibrahim, Hamidouche, Wassim, Grill, Adrien, Menard, Daniel, and Deforges, Olivier
- Subjects
ADAPTIVE filters ,VIDEO coding ,HARDWARE ,IMAGE reconstruction - Abstract
Versatile video coding (VVC) is the next-generation video coding standard, released in July 2020. VVC introduces new coding tools that enhance coding efficiency compared to its predecessor, high efficiency video coding (HEVC). These new tools have a significant impact on the VVC software decoder, whose complexity is estimated at two times that of an HEVC decoder. In particular, the adaptive loop filter (ALF), introduced in VVC as an in-loop filter, increases both decoding complexity and memory usage. These concerns need to be carefully addressed in the design of an efficient hardware implementation of a VVC decoder. In this paper, we present an efficient hardware implementation of the ALF tool for a VVC decoder. The proposed solution establishes a novel scanning order between the luma and chroma components that significantly reduces the ALF memory. The design takes advantage of all ALF features and establishes a unified hardware module for all ALF filters. It uses 26 regular multipliers in a pipelined architecture with a fixed throughput of 2 pixels/cycle and fixed system latency regardless of the selected filter. Operating at a 600 MHz frequency, the design enables an ASIC platform to decode 4K video at 30 frames per second in the 4:2:2 chroma sub-sampling format. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
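Aside: the abstract's throughput claim can be sanity-checked with back-of-the-envelope arithmetic. The sketch below assumes "pixel" means one sample (in 4:2:2, Cb and Cr together contribute as many samples as luma, i.e. 2 samples per pixel position); if a "pixel" instead means a luma+chroma group, the headroom is even larger, so the conclusion holds either way.

```python
# Numbers from the abstract: 4K (3840x2160) at 30 fps, 4:2:2, 2 pixels/cycle, 600 MHz
width, height, fps = 3840, 2160, 30
samples_per_pixel = 2       # 4:2:2 -> chroma samples equal the luma sample count
pixels_per_cycle = 2        # fixed throughput of the ALF pipeline
clock_hz = 600e6

samples_per_second = width * height * fps * samples_per_pixel  # ~497.7 M samples/s
cycles_needed = samples_per_second / pixels_per_cycle          # ~248.8 M cycles/s
utilization = cycles_needed / clock_hz                         # ~0.41 of the clock budget
```

At roughly 41% utilization of the 600 MHz clock, the stated 4K30 target is comfortably within the design's fixed 2-pixels/cycle throughput.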
46. Low-Complexity Intra Coding in Versatile Video Coding.
- Author
-
Choi, Kiho, The Van Le, Choi, Yongho, and Lee, Jin Young
- Subjects
VIDEO coding ,CONVOLUTIONAL neural networks ,MARKET entry - Abstract
Versatile Video Coding (VVC) was finalized in 2020 and offers promising coding efficiency, with a bitrate reduction of about 50% at the same video quality as High Efficiency Video Coding. However, its high encoding complexity is a heavy burden for real-time applications. In particular, the very high complexity of intra coding can be a big barrier to market entry. This paper presents an efficient low-complexity intra coding scheme that employs downsampling and upsampling processes. The downsampling simply reduces the resolution of the original video in both the horizontal and vertical directions. In the upsampling, convolutional-neural-network-based super-resolution is used to restore the resolution of the reconstructed video. In addition, this paper thoroughly analyzes the performance and complexity of all intra coding tools in VVC. Experimental results demonstrate that a significant reduction in encoding complexity can be achieved with acceptable video quality. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
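Aside: the down/up-sampling pipeline the abstract describes can be sketched in toy form. Here 2×2 block averaging stands in for the pre-encoding downsampler and nearest-neighbor repetition stands in for the CNN super-resolution stage; both are illustrative placeholders, not the paper's methods.

```python
import numpy as np

def downsample2x(frame):
    """2x2 block averaging: halves resolution in both directions."""
    h, w = frame.shape
    return frame.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(frame):
    """Nearest-neighbor 2x upsampling (toy stand-in for CNN super-resolution)."""
    return frame.repeat(2, axis=0).repeat(2, axis=1)

# Encode-side: shrink the frame; decode-side: restore the original resolution
frame = np.arange(16, dtype=float).reshape(4, 4)
restored = upsample2x(downsample2x(frame))
```

The encoder then only has to compress a quarter of the samples, which is where the intra-coding complexity saving comes from; the quality cost is whatever the upsampler cannot recover.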
47. On Infinite Families of Narrow-Sense Antiprimitive BCH Codes Admitting 3-Transitive Automorphism Groups and Their Consequences.
- Author
-
Liu, Qi, Ding, Cunsheng, Mesnager, Sihem, Tang, Chunming, and Tonchev, Vladimir D.
- Subjects
AUTOMORPHISM groups ,ALGEBRAIC coding theory ,REPRESENTATIONS of groups (Algebra) ,DATA transmission systems ,QUANTUM information science ,GROUP theory ,LINEAR codes ,CYCLIC codes - Abstract
The Bose-Chaudhuri-Hocquenghem (BCH) codes are a well-studied subclass of cyclic codes that have found numerous applications in error correction, notably in quantum information processing, and are widely used in data storage and communication systems. An attractive subclass of BCH codes is the narrow-sense BCH codes over the Galois field ${\mathrm {GF}}(q)$ with length $q+1$ , which are closely related to the action of the projective general linear group of degree two on the projective line. Despite their interest, not much is known about this class of BCH codes. This paper studies some of the codes within this class, specifically the narrow-sense antiprimitive BCH codes (these codes are also linear complementary dual (LCD) codes, which have found interesting recent practical applications in cryptography, among other benefits). We combine tools and arguments from algebraic coding theory, combinatorial designs, and group theory (group actions, representation theory of finite groups, etc.) to investigate narrow-sense antiprimitive BCH codes and extend results from the recent literature. Notably, the dimension and minimum distance of some $q$ -ary BCH codes with length $q+1$ , and of their duals, are determined in this paper. The dual codes of the narrow-sense antiprimitive BCH codes derived here include almost-MDS codes. Furthermore, the classification of ${\mathrm {PGL}}(2, p^{m})$ -invariant codes over ${\mathrm {GF}}(p^{h})$ is completed. As an application of this result, the $p$ -ranks of all incidence structures invariant under the projective general linear group ${\mathrm {PGL}}(2, p^{m})$ are determined. Finally, infinite families of narrow-sense BCH codes admitting a 3-transitive automorphism group are obtained, and via these BCH codes, a coding-theoretic approach to constructing the Witt spherical geometry designs is presented.
The BCH codes proposed in this paper are good candidates for permutation decoding, as they have a relatively large group of automorphisms. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
48. Turbo Equalization for Two Dimensional Magnetic Recording Using Voronoi Model Averaged Statistics.
- Author
-
Mehrnoush, Morteza, Belzer, Benjamin J., Sivakumar, Krishnamoorthy, and Wood, Roger
- Subjects
DATA tapes ,MAGNETIC recording heads ,SIGNAL-to-noise ratio ,VORONOI polygons ,GAUSSIAN mixture models ,DATA parsing - Abstract
This paper considers turbo equalization for 2-D magnetic recording. Magnetic grains are modeled as Voronoi regions of randomly distributed nuclei. Bits read from the magnetic grain model flow into a 2-D intersymbol interference (2D-ISI) channel model with additive white Gaussian noise. At high bit densities, some bits are not written on any grain and hence are effectively “overwritten” by surrounding bits. The proposed system iteratively exchanges log-likelihood ratios (LLRs) between a 2D-ISI equalizer based on the forward-backward algorithm and an irregular repeat-accumulate (IRA) decoder. To combat bit overwrites, the system employs a non-linear function to map the 2D-ISI extrinsic output LLRs to IRA decoder input LLRs. To pass LLRs back from the IRA decoder to the 2D-ISI equalizer, we design a simple likelihood-ratio-based LLR estimator. Simulations of the proposed system employing the perturbed-bit-centers grain model of a 2010 IEEE Transactions on Magnetics paper show a 6.5% increase in user bits per grain (U/G) and a 16.4 dB signal-to-noise ratio (SNR) gain over that paper, without iterative turbo equalization. Using the LLR estimator for iterative detection yields SNR gains of up to 1.7 dB over non-iterative detection. The random Voronoi model employed in this paper appears to be more difficult to equalize than the grain model in the 2010 paper. The proposed system with the random Voronoi model achieves 0.4422 U/G at SNR = 11.6 dB, i.e., about 8.8 Tb/in² at a typically assumed future grain density of 20 Tgr/in²; this is almost ten times the density of current systems at 10 Tgr/in². [ABSTRACT FROM PUBLISHER]
- Published
- 2016
- Full Text
- View/download PDF
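Aside: the grain-overwrite effect the abstract describes can be reproduced in a toy model. Below, random nuclei define Voronoi regions on the unit square, each bit cell writes to its nearest grain (so a grain retains only the last bit written to it), and reading a cell back returns its grain's value. The grid size and grain count are illustrative, not the paper's densities.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grains, grid = 40, 10              # 100 bit cells on 40 grains -> 2.5 bits/grain
nuclei = rng.random((n_grains, 2))   # Voronoi seeds

# Bit-cell centers on a regular grid over the unit square
xs = (np.arange(grid) + 0.5) / grid
cells = np.array([(x, y) for y in xs for x in xs])

# Each cell belongs to the Voronoi region of its nearest nucleus
dists = np.linalg.norm(cells[:, None, :] - nuclei[None, :, :], axis=2)
owner = dists.argmin(axis=1)

# Write: NumPy fancy-index assignment keeps the LAST write per grain,
# which models earlier bits being overwritten by surrounding bits
bits = rng.integers(0, 2, size=grid * grid)
grain_value = np.zeros(n_grains, dtype=int)
grain_value[owner] = bits

# Read: every cell returns the value its grain ended up with
read_back = grain_value[owner]
overwritten = int((read_back != bits).sum())
```

At 2.5 bits per grain, a sizable fraction of cells read back a neighbor's bit rather than their own, which is exactly the non-linearity the paper's LLR mapping is designed to combat.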
49. Blind Decoding of Control Channel for Other Users in 3GPP Standards.
- Author
-
Song, Seongwook, Kwon, Hyukjoon, and Kang, Inyup
- Subjects
INTERFERENCE channels (Telecommunications) ,LONG-Term Evolution (Telecommunications) ,DATA packeting ,RADIO resource management ,MODULATION coding - Abstract
This paper explores the blind decoding of control channels for obtaining other users' identities in the 3GPP specifications, such as high-speed packet access and long-term evolution. Reliable decoding of control channels with user identities is crucial for mitigating inter-cell interference as well as multi-user interference. This paper develops a method of user-identity filtering followed by a method of user-identity detection based on traffic persistency, which is common to all the standards; hence, the proposed methods are applicable to all standards regulated by the 3GPP specifications. In particular, this paper analyzes the proposed other-user identity detection algorithm under random coding. Simulation results show that the proposed method is reliable even at low SNRs and is also aligned with the analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
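Aside: a key mechanism behind such blind decoding in LTE is that the control channel's 16-bit CRC is XOR-masked with the target user's identity (RNTI), so a receiver that decodes the payload can recover the other user's identity by re-computing the CRC. The sketch below shows only that masking/unmasking step; rate matching, interleaving, and the paper's filtering and persistency methods are omitted, and all values are illustrative.

```python
def crc16(data: bytes, poly=0x1021, init=0x0000):
    """Bit-at-a-time CRC-16 with the CCITT polynomial x^16+x^12+x^5+1."""
    reg = init
    for byte in data:
        for i in range(8):
            bit = (byte >> (7 - i)) & 1
            msb = (reg >> 15) & 1
            reg = (reg << 1) & 0xFFFF
            if msb ^ bit:
                reg ^= poly
    return reg

def recover_user_id(payload: bytes, masked_crc: int) -> int:
    """The transmitter sends crc16(payload) XOR user_id as the CRC field;
    re-computing the CRC and XORing recovers the 16-bit user identity."""
    return crc16(payload) ^ masked_crc

# Transmitter side: mask the CRC with the user's identity (toy values)
payload, rnti = b"\x12\x34\x56", 0x4A3B
masked = crc16(payload) ^ rnti
```

Because the mask is a plain XOR on a deterministic checksum, recovery is exact whenever the payload decodes correctly; the hard part, which the paper addresses, is rejecting false identities produced by decoding errors.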
50. Optimizing Transmission Lengths for Limited Feedback With Nonbinary LDPC Examples.
- Author
-
Vakilinia, Kasra, Ranganathan, Sudarsan V. S., Divsalar, Dariush, and Wesel, Richard D.
- Subjects
LOW density parity check codes ,RANDOM noise theory ,RADIO transmitter fading ,ELECTRONIC feedback ,TELECOMMUNICATION systems - Abstract
This paper presents a general approach for optimizing the number of symbols in increments (packets of incremental redundancy) in a feedback communication system with a limited number of increments. The approach is based on a tight normal approximation of the rate required for successful decoding. Applying it to a variety of feedback systems using nonbinary (NB) low-density parity-check (LDPC) codes shows that more than 90% of capacity can be achieved with average blocklengths of fewer than 500 transmitted bits. One result is that the performance with ten increments closely approaches the performance with an infinite number of increments. The paper focuses on binary-input additive white Gaussian noise (BI-AWGN) channels, but also demonstrates that the normal approximation works well on examples of fading channels and of high-SNR AWGN channels that require larger QAM constellations. The paper explores both variable-length feedback codes with termination (VLFT) and the more practical variable-length feedback (VLF) codes without termination, which require no assumption of noiseless transmitter confirmation. For VLF, we consider both a two-phase scheme and a CRC-based scheme. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
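Aside: the normal approximation such optimizations rest on is the finite-blocklength estimate $R \approx C - \sqrt{V/n}\,Q^{-1}(\epsilon)$, where $C$ is capacity, $V$ the channel dispersion, $n$ the blocklength, and $\epsilon$ the target error probability. The sketch below uses the closed-form real-AWGN formulas for illustration (the paper's BI-AWGN dispersion requires numerical integration, which we skip here).

```python
import math
from statistics import NormalDist

def awgn_normal_approx_rate(snr, n, eps):
    """Normal approximation R ~= C - sqrt(V/n) * Qinv(eps), in bits per
    channel use, for the real AWGN channel at the given SNR (linear)."""
    C = 0.5 * math.log2(1 + snr)
    # Channel dispersion V (bits^2 per channel use) for the AWGN channel
    V = (snr * (snr + 2)) / (2 * (snr + 1) ** 2) * (math.log2(math.e)) ** 2
    q_inv = NormalDist().inv_cdf(1 - eps)   # Q^{-1}(eps)
    return C - math.sqrt(V / n) * q_inv

# At SNR = 0 dB, n = 500, eps = 1e-3: noticeably below the 0.5 bit capacity
rate = awgn_normal_approx_rate(snr=1.0, n=500, eps=1e-3)
```

The $\sqrt{V/n}$ backoff shrinks as $n$ grows, which is why the paper can trade a small number of feedback increments against how close the average blocklength gets to capacity.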