1. Hardware Architecture for Guessing Random Additive Noise Decoding Markov Order (GRAND-MO)
- Author
- Abbas, Syed Mohsin, Jalaleddine, Marwan, and Gross, Warren J.
- Abstract
Communication channels with memory are often sensitive to burst noise, which drastically reduces the decoding performance of standard channel code decoders, and this degradation worsens as channel memory increases. Hence, interleavers and de-interleavers are usually used to reduce the effects of burst noise, at the expense of increased latency in the communication system. The delay imposed by interleavers/de-interleavers and the performance deterioration induced by channel memory are unacceptable in novel applications that require ultra-low latency and high decoding performance. Guessing Random Additive Noise Decoding (GRAND) is a universal Maximum Likelihood (ML) decoding technique for short-length and high-rate channel codes. GRAND Markov Order (GRAND-MO) is a hard-input variant of GRAND developed specifically for communication channels with memory that are subject to burst noise. GRAND-MO can be used directly on hard-demodulated channel signals, removing the requirement for extra interleavers/de-interleavers and considerably reducing overall latency in communication systems. This paper describes a high-throughput GRAND-MO VLSI architecture that can achieve an average throughput of up to 52 Gbps and 64 Gbps for code lengths of 128 and 79, respectively. Furthermore, we propose improvements to the GRAND-MO algorithm to simplify hardware implementation and reduce decoding complexity. When compared to GRANDAB, a hard-input variant of GRAND, the proposed improved GRAND-MO algorithm yields a decoding performance gain of 2∼3 dB at a target FER of 10⁻⁵. Similarly, compared to the (79, 64) BCH code decoder, the proposed GRAND-MO decoder has a 33% lower worst-case latency and a 2 dB gain in decoding performance at a target FER of 10⁻⁵.
- Published
- 2022
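
The abstract above centers on GRAND's guess-and-check principle: propose putative noise sequences in decreasing order of likelihood, subtract each from the received hard-decision word, and stop at the first candidate that is a valid codeword. The Python sketch below illustrates that basic loop under simplifying assumptions, using GRANDAB-style Hamming-weight-ordered queries with an abandonment threshold and a toy parity-check matrix; it is not the GRAND-MO Markov-ordered query schedule or the VLSI architecture described in the paper, and the names (`grand_decode`, `max_weight`) are illustrative only. GRAND-MO differs in that it orders queries to favor bursty error patterns consistent with the channel's Markov memory, which is what lets it work without interleaving.

```python
# Minimal illustrative sketch of the basic GRAND guess-and-check loop.
# NOT the GRAND-MO query schedule or the paper's hardware: error patterns
# are queried in increasing Hamming weight (GRANDAB-style), whereas
# GRAND-MO orders queries using a Markov model of burst noise.
from itertools import combinations

import numpy as np


def grand_decode(y, H, max_weight=3):
    """Guess additive noise patterns and return the first codeword found.

    y          : hard-decision received vector (0/1 array of length n)
    H          : (n-k) x n binary parity-check matrix of the code
    max_weight : abandonment threshold on the guessed error weight
    """
    n = len(y)
    for w in range(max_weight + 1):            # weight-0 pattern (y itself) first
        for flips in combinations(range(n), w):
            e = np.zeros(n, dtype=int)
            e[list(flips)] = 1                 # putative noise pattern
            c = (y + e) % 2                    # candidate codeword: y XOR e
            if not np.any(H @ c % 2):          # syndrome check: H c^T = 0 ?
                return c, e                    # first hit is declared the ML guess
    return None, None                          # abandon: declare a decoding failure


# Usage with a toy (7, 4) Hamming code parity-check matrix.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
y = np.array([1, 0, 1, 1, 0, 0, 1])           # received word with one bit flipped
codeword, noise = grand_decode(y, H)
print(codeword, noise)
```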