11 results for "*DECODING algorithms"
Search Results
2. A latency‐reduced SC flip decoding algorithm for polar codes.
- Author
- Yang, Dong, Mao, Yinyou, and Liu, Xingcheng
- Subjects
- *DECODING algorithms, *COMPUTATIONAL complexity, *SIGNAL-to-noise ratio, *ERROR rates
- Abstract
The successive cancellation flip (SC Flip) decoding algorithm was recently proposed for decoding polar codes and improves frame error rate (FER) performance. The SC Flip (SCF) decoding algorithm performs well, and its computational complexity approaches that of the SC decoder at medium to high signal-to-noise ratios (SNRs). However, its decoding latency is high. In this paper, a new method is proposed for detecting whether or not a flipped bit is correct. The method makes a decision based on the change in the log-likelihood ratio (LLR) value caused by the flipped bit, so that the decoding process can be terminated early and the decoding latency reduced. Simulation results show that the proposed latency-reduced SCF decoding algorithm decreases computational complexity and decoding latency while achieving decoding performance similar to that of its counterpart. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
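The retry-on-CRC-failure loop that the abstract describes can be sketched as follows. This is a minimal illustration of the generic SC flip idea, not the paper's specific LLR-change termination test; `sc_pass` and `crc_ok` are assumed callables standing in for a full SC decoder and CRC check.

```python
def flip_candidates(llrs, info_positions, t_max=4):
    """Rank information-bit positions for flipping by reliability.

    In SC flip decoding, when the first SC pass fails its CRC, the
    decoder retries after flipping the hard decision at one of the
    least reliable information positions (smallest |LLR|).
    """
    ranked = sorted(info_positions, key=lambda i: abs(llrs[i]))
    return ranked[:t_max]

def scf_decode(llrs, info_positions, sc_pass, crc_ok, t_max=4):
    """One initial SC pass; on CRC failure, retry with single-bit flips."""
    bits = sc_pass(llrs, flip=None)
    if crc_ok(bits):
        return bits
    for pos in flip_candidates(llrs, info_positions, t_max):
        bits = sc_pass(llrs, flip=pos)
        if crc_ok(bits):
            return bits          # terminate as soon as the CRC passes
    return bits                  # all flip attempts exhausted
```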
3. Novel K best sphere decoders with higher order modulation for internet of things.
- Author
- Mishra, Priyanka, Thakur, Prabhat, and Singh, G.
- Subjects
- *QUADRATURE amplitude modulation, *INTERNET of things, *BIT error rate, *DECODING algorithms, *SPHERES, *ERROR rates, *COMPUTATIONAL complexity
- Abstract
Summary: Future-generation communication devices, particularly in the Internet of Things (IoT), demand low bit error rates (BER) and higher spectral efficiencies. In this context, a low-complexity detection algorithm plays a vital role in determining receiver performance. In this paper, we evaluate the performance of low-complexity sphere decoders using the K-best algorithm, modifying conventional sphere decoders into new K and K1 sphere decoders. The proposed algorithm enables the sphere decoders to use smaller values of K for better performance. Numerical simulation is performed in MATLAB to validate the proposed algorithm in terms of BER and spectral efficiency for higher-order modulation techniques such as 1024 quadrature amplitude modulation (QAM) and offset quadrature amplitude modulation (OQAM). Further, we also analyze the computational complexity of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
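The K-best principle behind such decoders can be sketched in a few lines: expand each surviving candidate with every constellation symbol, layer by layer, and keep only the K candidates with the smallest accumulated distance. This is a generic breadth-first K-best detector over an upper-triangular system (i.e. after QR decomposition of the channel), not the paper's modified K/K1 variants; all names are illustrative.

```python
def k_best_detect(R, y, constellation, K):
    """K-best breadth-first sphere detection.

    Assumes the channel has been QR-decomposed so that y = R s + n
    with R upper triangular; detection walks layers n-1 .. 0, keeping
    only the K partial candidates with the smallest accumulated
    Euclidean distance at each layer.
    """
    n = len(y)
    # each survivor: (partial_distance, symbols for layers layer..n-1)
    survivors = [(0.0, [])]
    for layer in range(n - 1, -1, -1):
        expanded = []
        for dist, symbols in survivors:
            for s in constellation:
                cand = [s] + symbols
                # estimate of y[layer] given the symbols decided so far
                est = sum(R[layer][layer + j] * cand[j]
                          for j in range(len(cand)))
                expanded.append((dist + (y[layer] - est) ** 2, cand))
        survivors = sorted(expanded, key=lambda t: t[0])[:K]
    return survivors[0][1]
```

Smaller K means fewer surviving branches per layer and hence lower complexity, which is exactly the trade-off the paper targets.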
4. Certain investigations on recent advances in the design of decoding algorithms using low‐density parity‐check codes and its applications.
- Author
- Kingston Roberts, Michaelraj, Kumari, Saru, and Anguraj, Parthibaraj
- Subjects
- *DECODING algorithms, *LOW density parity check codes, *CODING theory, *INFORMATION theory, *DATA transmission systems, *ITERATIVE decoding, *COMPUTATIONAL complexity
- Abstract
Summary: Information theory and coding is a celebrated field of research that has spawned numerous important solutions to the intractable problems of secure data communications. Recent advances in error control coding have seen a huge surge in low-density parity-check (LDPC) code-based decoding algorithms for addressing issues of reliable data transmission and reception. To date, extensive research on LDPC codes has focused on both algorithm-driven and hardware-realization-based approaches. The main intention of this work is to provide a systematic elucidation of recent advances in LDPC decoding algorithms. In addition, a thorough performance evaluation and analysis of several notable LDPC decoding techniques is presented. Finally, conclusions are drawn by summarizing the key research findings, open problems, current challenges, and broader perspectives for future research directions. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
5. Simplified sparse code multiple access receiver by using truncated messages.
- Author
- Ke Lai, Lei Wen, Jing Lei, Jie Zhong, GaoJie Chen, and XiaoTian Zhou
- Subjects
- *WIRELESS communications, *DECODING algorithms, *MESSAGE passing (Computer science), *NETWORK performance, *COMPUTATIONAL complexity
- Abstract
Sparse code multiple access (SCMA) is a promising candidate air interface for next-generation mobile networks. However, the decoding complexity of the current message passing algorithm for SCMA is very high. In this study, the authors map the SCMA constellation to the q-order Galois field (GF(q)) and introduce a trellis representation of SCMA. Based on this trellis representation, they propose low-complexity decoding algorithms for SCMA using truncated messages, referred to as the extended max-log (EML) algorithm. As the truncated length in EML is the same for every user, they further propose a channel-adaptive EML algorithm that truncates the messages according to a rule that adapts to the channel state. Simulation results show that the proposed schemes achieve low computational complexity with only slight performance degradation when the truncated length is selected appropriately. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
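The core trick of truncating message-passing messages can be illustrated generically: keep only the t most likely symbol metrics of a log-domain message over GF(q) and clamp the rest to a floor value, so downstream updates touch fewer entries. This is an assumed form for illustration only, not the authors' exact EML truncation rule.

```python
def truncate_message(llrs, t, floor=-30.0):
    """Keep the t most likely symbol metrics of a log-domain message.

    llrs: log-domain metrics over the q symbols of GF(q), larger = more
    likely. Entries outside the top t are clamped to a fixed floor, so
    subsequent max-log updates need only consider t candidates.
    """
    keep = sorted(range(len(llrs)), key=lambda i: llrs[i], reverse=True)[:t]
    return [llrs[i] if i in keep else floor for i in range(len(llrs))]
```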
6. Lattice reduction-based iterative receivers: using partial bit-wise MMSE filter with randomised sampling and MAP-aided integer perturbation.
- Author
- Lin Bai, Tian Li, Lewen Zhao, and Jinho Choi
- Subjects
- *DECODING algorithms, *MIMO systems, *QUANTUM perturbations, *RANDOMIZED controlled trials, *COMPUTATIONAL complexity
- Abstract
For iterative detection and decoding (IDD) in multiple-input multiple-output systems, the maximum a posteriori probability (MAP) detector would be ideal in terms of performance. However, due to its high computational complexity, various suboptimal low-complexity approximate MAP detectors have been studied. In this study, a lattice reduction (LR)-based detector is considered for near-optimal IDD performance. The authors further improve performance by employing a partial bit-wise minimum mean square error (MMSE) approach with randomised sampling, which has lower complexity than the full bit-wise MMSE method. Moreover, the list of candidate vectors obtained by randomised sampling is extended using a MAP-aided integer perturbation algorithm for better performance at low additional complexity. Simulation results show that near-optimal performance can be obtained, better than that of LR-based randomised successive interference cancellation and the full bit-wise MMSE methods. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
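For reference, the linear MMSE filter that such bit-wise and LR-aided schemes refine has a standard closed form. A minimal sketch for the model y = Hx + n with noise variance sigma2; this is the textbook baseline, not the paper's partial bit-wise or LR-aided filter.

```python
import numpy as np

def mmse_detect(H, y, sigma2):
    """Linear MMSE detection for y = H x + n.

    Computes W = (H^H H + sigma2 I)^(-1) H^H and returns the soft
    estimate W y; hard decisions would then quantise each entry to
    the nearest constellation point.
    """
    n_t = H.shape[1]
    W = np.linalg.inv(H.conj().T @ H + sigma2 * np.eye(n_t)) @ H.conj().T
    return W @ y
```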
7. Resource allocation in MIMO-OFDM-based cooperative cognitive radio networks: optimal and suboptimal low complexity approaches.
- Author
- Adian, M. G. and Aghaeinia, H.
- Subjects
- *COGNITIVE radio, *RESOURCE allocation, *COMPUTATIONAL complexity, *DECODING algorithms, *ANTENNAS (Electronics)
- Abstract
The problem of resource allocation in multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM) based cooperative cognitive radio networks is considered in this paper. The cooperation strategy between the secondary users is the decode-and-forward (DF) strategy. To obtain optimal subcarrier pairing, relay selection, and power allocation in the system, the dual decomposition technique is employed. The optimal resource allocation is realized under individual power constraints at the source and relays so that the sum rate is maximized while the interference induced to the primary system is kept below a pre-specified interference temperature limit. Moreover, because of the high computational complexity of the optimal approach, a suboptimal algorithm is further proposed. The joint allocation of resources in the suboptimal algorithm takes into account the channel qualities, the DF cooperation strategy, the interference induced to the primary system, and the individual power budgets. The performance of the different approaches and the impact of the constraint values and of deploying multiple antennas at the users are discussed through numerical simulation results. Copyright © 2014 John Wiley & Sons, Ltd. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
8. Complexity reduced turbo-bit-interleaved coded modulation with iterative decoding.
- Author
- Donghoon Kang and Wangrok Oh
- Subjects
- *COMPUTATIONAL complexity, *MODULATION coding, *DECODING algorithms, *ITERATIVE decoding, *INFORMATION sharing, *RADIO detectors
- Abstract
The spectral efficiency of turbo codes can be improved by coded modulation methods such as bit-interleaved coded modulation (BICM), giving the turbo-BICM scheme. The performance of turbo-BICM can be further improved by iteratively exchanging information between the decoder and the demodulator within the receiver; this scheme is known as turbo-BICM with iterative decoding (turbo-BICM-ID). Unfortunately, the required complexity of turbo-BICM-ID is much higher than that of turbo-BICM. In this study, the authors propose a complexity-reduced turbo-BICM-ID by optimising the mapping of coded bits to modulation symbols and the amount of information fed back from the decoder to the demodulator. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
9. Mixed modified weighted bit-flipping decoding of low-density parity-check codes.
- Author
- Haiyi Huang, Yige Wang, and Gang Wei
- Subjects
- *LOW density parity check codes, *DECODING algorithms, *COMPUTATIONAL complexity, *ADDITIVE white Gaussian noise channels, *RANDOM variables
- Abstract
In this study, a new weighted bit-flipping (WBF) algorithm called mixed modified WBF (MM-WBF) decoding is proposed for low-density parity-check codes. The new algorithm mixes two WBF algorithms, one acting as the main decoding algorithm and the other as the auxiliary. Simulation results show that the new algorithm achieves a notable performance gain over the reliability ratio-based WBF, improved M-WBF, and low-complexity WBF algorithms with almost the same computational complexity. Compared with some belief-propagation-based algorithms, MM-WBF also provides an appealing performance-versus-complexity trade-off. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
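A plain weighted bit-flipping decoder, the common ancestor of the variants compared in this abstract, can be sketched as follows. The check weights and flipping metric here follow a basic WBF formulation for illustration, not MM-WBF itself.

```python
def wbf_decode(H, y, max_iter=20):
    """Basic weighted bit-flipping decoding for an LDPC code.

    H: parity-check matrix as a list of 0/1 rows.
    y: received soft values (BPSK: bit b sent as 1 - 2b, plus noise).
    Hard decision z_n = 1 iff y_n < 0. Each check m is weighted by the
    smallest |y| among its bits; each iteration flips the bit with the
    largest weighted sum of unsatisfied checks.
    """
    m, n = len(H), len(H[0])
    z = [1 if v < 0 else 0 for v in y]
    # reliability weight of each check: its least reliable bit
    w = [min(abs(y[j]) for j in range(n) if H[i][j]) for i in range(m)]
    for _ in range(max_iter):
        syndrome = [sum(H[i][j] * z[j] for j in range(n)) % 2
                    for i in range(m)]
        if not any(syndrome):
            return z                 # valid codeword found
        # flipping metric: unsatisfied checks add +w, satisfied add -w
        E = [sum((2 * syndrome[i] - 1) * w[i]
                 for i in range(m) if H[i][j]) for j in range(n)]
        z[max(range(n), key=lambda j: E[j])] ^= 1
    return z
```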
10. A novel decoding scheme based on recalculation for double binary convolutional turbo code.
- Author
- Zhan, Ming, Wu, Jun, and Zhou, Liang
- Subjects
- *DECODING algorithms, *ITERATIVE decoding, *TURBO codes, *ALGORITHMS, *LAST in, first out (Accounting)
- Abstract
To decrease the storage complexity of a double binary convolutional turbo code (DB-CTC) decoder, a novel decoding scheme is proposed in this paper. Unlike the conventional decoding scheme, only part of the state metrics is stored in the last-in first-out (LIFO) state metrics cache (SMC). Based on an improved maximum a posteriori probability (MAP) algorithm, we present a method to recalculate the unstored state metrics at the corresponding decoding time slot, and the procedures of the recalculation are discussed in detail. Owing to the compare-select-recalculate processing operations, the proposed decoding scheme reduces the storage complexity of the SMC and the number of memory accesses by approximately 40% compared with the classical decoding scheme, while limiting the additional computational cost. Moreover, simulation results show that the proposed scheme achieves good decoding performance, close to that of the well-known Log-MAP algorithm. © 2013 Institute of Electrical Engineers of Japan. Published by John Wiley & Sons, Inc. [ABSTRACT FROM AUTHOR]
- Published
- 2013
- Full Text
- View/download PDF
11. Unequal error protection codes derived from SEC-DED codes.
- Author
- Reviriego, Pedro, Shan Shan Liu, Sánchez-Macián, Alfonso, Liyi Xiao, and Maestro, Juan Antonio
- Subjects
- *ERROR correction (Information theory), *ERROR detection (Information theory), *ENCODING, *DECODING algorithms, *COMPUTATIONAL complexity
- Abstract
Error correction codes are commonly used to protect data stored in memories from errors. Among the codes used, single error correction double error detection (SEC-DED) codes are probably the most common due to their simplicity. In some applications the importance of the bits differs, with some being critical while others can tolerate occasional errors. This is the case, for example, in some multimedia and signal processing systems. For those applications, unequal error protection (UEP) codes, which provide different levels of protection for different bits, can be used. In many cases, only a few bits require extra protection, so it is convenient to extend a traditional code to provide additional protection for those bits. A simple method to derive UEP codes from SEC-DED codes is presented. The proposed UEP codes protect a few bits against double errors and behave as SEC-DED codes for the rest. Encoding and decoding complexity is only slightly larger than that of an SEC-DED code, and the implementation is a simple modification of the SEC-DED implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2016
- Full Text
- View/download PDF
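The SEC-DED behaviour that the paper extends can be illustrated with the (8,4) extended Hamming code: a single bit error is corrected via the syndrome, while a double error yields a nonzero syndrome with even overall parity and is flagged as uncorrectable. A minimal sketch follows; the bit layout is one common convention, not the paper's.

```python
def secded_encode(data):
    """Encode 4 data bits into an 8-bit extended Hamming (SEC-DED) word.

    Position i (1..7) has binary index i; data bits occupy the
    non-power-of-two positions 3, 5, 6, 7; Hamming parity bits sit at
    1, 2, 4; position 0 holds the overall parity bit.
    """
    w = [0] * 8
    for pos, bit in zip((3, 5, 6, 7), data):
        w[pos] = bit
    for p in (1, 2, 4):
        w[p] = sum(w[i] for i in range(1, 8) if i & p) % 2
    w[0] = sum(w) % 2            # overall parity enables double detection
    return w

def secded_decode(r):
    """Return (word, status): 'ok', 'corrected', or 'double_error'."""
    r = list(r)
    syn = 0
    for i in range(1, 8):        # syndrome = XOR of indices holding a 1
        if r[i]:
            syn ^= i
    parity = sum(r) % 2
    if syn == 0 and parity == 0:
        return r, 'ok'
    if parity == 1:              # odd error count: assume a single error
        r[syn if syn else 0] ^= 1
        return r, 'corrected'
    return r, 'double_error'     # even error count, nonzero syndrome
```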