19 results for "Kavcic, Aleksandar"
Search Results
2. Low-complexity soft-decoding algorithms for Reed-Solomon codes - part II: soft-input soft-output iterative decoding
- Author
-
Bellorado, Jason, Kavcic, Aleksandar, Marrow, Marcus, and Ping, Li
- Subjects
Algorithm, Reed-Solomon codes -- Research, Decoders -- Research, Algorithms -- Research, Iterative methods (Mathematics) -- Research
- Published
- 2010
3. Low-complexity soft-decoding algorithms for Reed-Solomon codes - part I: an algebraic soft-in hard-out chase decoder
- Author
-
Bellorado, Jason and Kavcic, Aleksandar
- Subjects
Algorithm, Decoders -- Research, Reed-Solomon codes -- Research, Algorithms -- Research
- Published
- 2010
4. A generalization of the Blahut-Arimoto algorithm to finite-state channels
- Author
-
Vontobel, Pascal O., Kavcic, Aleksandar, Arnold, Dieter M., and Loeliger, Hans-Andrea
- Subjects
Algorithm, Algorithms -- Analysis, Finite state automata -- Analysis
- Abstract
The classical Blahut-Arimoto algorithm (BAA) is a well-known algorithm that optimizes a discrete memoryless source (DMS) at the input of a discrete memoryless channel (DMC) in order to maximize the mutual information between channel input and output. This paper considers the problem of optimizing finite-state machine sources (FSMSs) at the input of finite-state machine channels (FSMCs) in order to maximize the mutual information rate between channel input and output. Our main result is an algorithm that efficiently solves this problem numerically; thus, we call the proposed procedure the generalized BAA. It includes as special cases not only the classical BAA but also an algorithm that solves the problem of finding the capacity-achieving input distribution for finite-state channels with no noise. While we present theorems that characterize the local behavior of the generalized BAA, there are still open questions concerning its global behavior; these open questions are addressed by some conjectures at the end of the paper. Apart from these algorithmic issues, our results lead to insights regarding the local conditions that the information-rate-maximizing FSMSs fulfill; these observations naturally generalize the well-known Kuhn-Tucker conditions that are fulfilled by capacity-achieving DMSs at the input of DMCs. Index Terms--Blahut-Arimoto algorithm (BAA), capacity, constrained capacity, finite-state machine channels (FSMCs), finite-state machine sources (FSMSs), information rate, optimization, run-length constraints.
- Published
- 2008
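The classical BAA that this paper generalizes is compact enough to sketch. The following is an illustrative implementation of the classical DMC special case only, not the paper's generalized algorithm for finite-state channels; the function name, iteration cap, and tolerance are our own choices.

```python
import numpy as np

def blahut_arimoto(P, iters=500, tol=1e-10):
    """Capacity-achieving input distribution for a DMC.

    P[x, y] = P(Y=y | X=x); rows sum to 1.  Returns (p, capacity_bits).
    """
    p = np.full(P.shape[0], 1.0 / P.shape[0])   # start from the uniform input
    for _ in range(iters):
        q = p @ P                               # induced output distribution
        # D[x] = KL(P(.|x) || q) in nats, skipping zero-probability outputs
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(P > 0, P / q, 1.0)
        D = np.sum(np.where(P > 0, P * np.log(ratio), 0.0), axis=1)
        p_new = p * np.exp(D)                   # multiplicative BAA update
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            p = p_new
            break
        p = p_new
    # sum_x p(x) KL(P(.|x) || q) is the mutual information; convert to bits
    return p, float(p @ D) / np.log(2)
```

For a binary symmetric channel with crossover 0.1, this converges to the uniform input and a capacity of about 0.531 bits, matching 1 - H2(0.1).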
5. On the feedback capacity of power-constrained Gaussian noise channels with memory
- Author
-
Yang, Shaohua, Kavcic, Aleksandar, and Tatikonda, Sekhar
- Subjects
Random noise theory -- Analysis, Dynamic programming -- Analysis, Mathematical optimization -- Analysis
- Abstract
For a stationary additive Gaussian-noise channel with a rational noise power spectrum of finite order L, we derive two new results for the feedback capacity under an average channel input power constraint. First, we show that a very simple feedback-dependent Gauss-Markov source achieves the feedback capacity, and that Kalman-Bucy filtering is optimal for processing the feedback. Based on these results, we develop a new method for optimizing the channel inputs for achieving the Cover-Pombra block-length-n feedback capacity by using a dynamic programming approach that decomposes the computation into n sequentially identical optimization problems where each stage involves optimizing O(L^2) variables. Second, we derive the explicit maximal information rate for stationary feedback-dependent sources. In general, evaluating the maximal information rate for stationary sources requires solving only a few equations by simple nonlinear programming. For first-order autoregressive and/or moving average (ARMA) noise channels, this optimization admits a closed-form maximal information rate formula. The maximal information rate for stationary sources is a lower bound on the feedback capacity, and it equals the feedback capacity if the long-standing conjecture, that stationary sources achieve the feedback capacity, holds. Index Terms--Channel capacity, directed information, dynamic programming, feedback capacity, Gauss-Markov source, information rate, intersymbol interference, Kalman-Bucy filter, linear Gaussian noise channel, noise whitening filter.
- Published
- 2007
6. Simulation-based computation of information rates for channels with memory
- Author
-
Arnold, Dieter M., Loeliger, Hans-Andrea, Vontobel, Pascal O., Kavcic, Aleksandar, and Zeng, Wei
- Subjects
Markov processes -- Usage, Finite state automata -- Analysis, Approximation theory -- Usage
- Abstract
The information rate of finite-state source/channel models can be accurately estimated by sampling both a long channel input sequence and the corresponding channel output sequence, followed by a forward sum-product recursion on the joint source/channel trellis. This method is extended to compute upper and lower bounds on the information rate of very general channels with memory by means of finite-state approximations. Further upper and lower bounds can be computed by reduced-state methods. Index Terms--Bounds, channel capacity, finite-state models, hidden-Markov models, information rate, sum-product algorithm, trellises.
- Published
- 2006
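The core of the method above is a forward sum-product (alpha) recursion that computes log P(y) for a finite-state model, from which the estimate -(1/n) log P(y) of the entropy rate follows. A minimal sketch; the helper name and the toy model parameterization below are illustrative, not from the paper.

```python
import numpy as np

def forward_log2_prob(y, T, E, pi):
    """log2 P(y) for a finite-state (hidden-Markov) model via the scaled
    forward sum-product recursion.  T[s, s']: state transition probabilities,
    E[s, o]: emission probabilities, pi: initial state distribution."""
    alpha = pi * E[:, y[0]]
    logp = np.log2(alpha.sum())
    alpha = alpha / alpha.sum()          # rescale to avoid underflow
    for o in y[1:]:
        alpha = (alpha @ T) * E[:, o]    # one trellis step
        logp += np.log2(alpha.sum())
        alpha = alpha / alpha.sum()
    return logp
```

Sampling a long output sequence and evaluating -forward_log2_prob(y, ...)/len(y) estimates the entropy rate h(Y); information-rate estimates then follow by combining such terms, e.g., I = h(Y) - h(Y|X), which is the simulation-based scheme the abstract describes.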
7. Error exponents for finite-hypothesis channel identification
- Author
-
Mitran, Patrick and Kavcic, Aleksandar
- Subjects
Signal detection (Electronics) -- Methods, Signal detection (Electronics) -- Analysis, Data communications -- Analysis
- Abstract
We consider the problem of designing optimal probing signals for finite-hypothesis testing. Equivalently, we cast the problem as the design of optimal channel input sequences for identifying a discrete channel under observation from a finite set of known channels. The optimality criterion that we employ is the exponent of the Bayesian probability of error. In our study, we consider a feedforward scenario where there is no feedback from the channel output to the signal selector at the channel input and a feedback scenario where the past channel outputs are revealed to the signal selector. In the feedforward scenario, only the type of the input sequence matters and our main result is an expression for the error exponent in terms of the limiting distribution of the input sequence. In the feedback case, we show that when discriminating between two channels, the optimal scheme in the first scenario is simultaneously the optimal time-invariant Markov feedback policy of any order. Index Terms--Bayesian hypothesis testing, classification, detection theory, Chernoff's theorem, error exponent, feedback, method of types, sequential detection, signal selection, waveform selection.
- Published
- 2006
8. Matched information rate codes for partial response channels
- Author
-
Kavcic, Aleksandar, Ma, Xiao, and Varnica, Nedeljko
- Subjects
Markov processes -- Research, Information theory -- Research
- Abstract
In this paper, we design capacity-approaching codes for partial response channels. The codes are constructed as concatenations of inner trellis codes and outer low-density parity-check (LDPC) codes. Unlike previous constructions of trellis codes for partial response channels, we disregard any algebraic properties (e.g., the minimum distance or the run-length limit) in our design of the trellis code. Our design is purely probabilistic in that we construct the inner trellis code to mimic the transition probabilities of a Markov process that achieves a high (capacity-approaching) information rate. Hence, we name it a matched information rate (MIR) design. We provide a set of five design rules for constructions of capacity-approaching MIR inner trellis codes. We optimize the outer LDPC code using density evolution tools specially modified to fit the superchannel consisting of the inner MIR trellis code concatenated with the partial response channel. Using this strategy, we design degree sequences of irregular LDPC codes whose noise tolerance thresholds are only fractions of a decibel away from the capacity. Examples of code constructions are shown for channels both with and without spectral nulls. Index Terms--Channel capacity, concatenated codes, density evolution, low-density parity-check (LDPC) codes, Markov capacity, Markov processes, partial response channel, superchannel, trellis code.
- Published
- 2005
9. Feedback capacity of finite-state machine channels
- Author
-
Yang, Shaohua, Kavcic, Aleksandar, and Tatikonda, Sekhar
- Subjects
Algorithm, Information theory -- Research, Algorithms -- Research
- Abstract
We consider a finite-state machine channel with a finite memory length (e.g., finite length intersymbol interference channels with finite input alphabets--also known as partial response channels). For such a finite-state machine channel, we show that feedback-dependent Markov sources achieve the feedback capacity, and that the required memory length of the Markov process matches the memory length of the channel. Further, we show that the whole history of feedback is summarized by the causal posterior channel state distribution, which is computed by the sum-product forward recursion of the Bahl-Cocke-Jelinek-Raviv (BCJR) (Baum-Welch, discrete-time Wonham filtering) algorithm. These results drastically reduce the space over which the optimal feedback-dependent source distribution needs to be sought. Further, the feedback capacity computation may then be formulated as an average-reward-per-stage stochastic control problem, which is solved by dynamic programming. With the knowledge of the capacity-achieving source distribution, the value of the capacity is easily estimated using Markov chain Monte Carlo methods. When the feedback is delayed, we show that the feedback capacity can be computed by similar procedures. We also show that the delayed feedback capacity is a tight upper bound on the feedforward capacity by comparing it to tight existing lower bounds. We demonstrate the applicability of the method by computing the feedback capacity of partial response channels and the feedback capacity of run-length-limited (RLL) sequences over binary symmetric channels (BSCs). Index Terms--Channel capacity, delayed feedback capacity, directed information rate, dynamic programming, feedback capacity, finite-state machine channels, intersymbol interference, partial response channels, run-length-limited (RLL) sequences.
- Published
- 2005
10. Equal-diagonal QR decomposition and its application to precoder design for successive-cancellation detection
- Author
-
Zhang, Jian-Kang, Kavcic, Aleksandar, and Wong, Kon Max
- Subjects
Wireless technology, Wireless communication systems -- Research, Mobile communication systems -- Research, Information theory -- Research
- Abstract
In multiple-input multiple-output (MIMO) multiuser detection theory, the QR decomposition of the channel matrix H can be used to form the back-cancellation detector. In this paper, we propose an optimal QR decomposition, which we call the equal-diagonal QR decomposition, or briefly the QRS decomposition. We apply the decomposition to precoded successive-cancellation detection, where we assume that both the transmitter and the receiver have perfect channel knowledge. We show that, for any channel matrix H, there exists a unitary precoder matrix S, such that HS = QR, where the nonzero diagonal entries of the upper triangular matrix R in the QR decomposition of HS are all equal to each other. The precoder and the resulting successive-cancellation detector have the following properties: a) The minimum Euclidean distance between two signal points at the channel output is equal to the minimum Euclidean distance between two constellation points at the precoder input up to a multiplicative factor that equals the diagonal entry in the R-factor. b) The superchannel HS naturally exhibits an optimally ordered column permutation, i.e., the optimal detection order for the vertical Bell Labs layered space-time (V-BLAST) detector is the natural order. c) The precoder S minimizes the block error probability of the QR successive cancellation detector. d) A lower and an upper bound for the free distance at the channel output is expressible in terms of the diagonal entries of the R-factor in the QR decomposition of a channel matrix. e) The precoder S maximizes the lower bound of the channel's free distance subject to a power constraint. f) For the optimal precoder S, the performance of the QR detector is asymptotically (at large signal-to-noise ratios (SNRs)) equivalent to that of the maximum-likelihood detector (MLD) that uses the same precoder.
Further, in this paper we consider two multiplexing schemes: time-division multiple access (TDMA) and orthogonal frequency-division multiplexing (OFDM). We design the optimal precoder for binary phase-shift keying (BPSK) with these multiplexing schemes, but outline the procedure to extend the method to nonbinary schemes such as pulse amplitude modulation (PAM), phase-shift keying (PSK), and quadrature amplitude modulation (QAM). Finally, examples are given that illustrate the performance of the precoder and the corresponding successive cancellation detector. Index Terms--Maximum-likelihood detection (MLD), minimum distance, multiple-input multiple-output (MIMO) systems, orthogonal frequency-division multiplexing (OFDM), precoders, QR decomposition, successive cancellation detection, time-division multiple access (TDMA).
- Published
- 2005
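A quick numerical illustration of why an equal-diagonal R-factor is special. This checks a determinant invariance only; it does not construct the paper's QRS precoder. For any orthogonal precoder, the product of the |r_ii| in the QR decomposition of HS is fixed at |det H|, so equalizing the diagonal forces every entry to the geometric mean of the singular values of H.

```python
import numpy as np

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 3))                    # arbitrary channel matrix

S, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # a random orthogonal precoder
for precoder in (np.eye(3), S):
    Q, R = np.linalg.qr(H @ precoder)
    # |det(H @ precoder)| = |det H| for orthogonal precoders, and the QR
    # factorization gives |det| = prod |r_ii|
    assert np.isclose(np.abs(np.prod(np.diag(R))), np.abs(np.linalg.det(H)))

# Hence an equal-diagonal QR decomposition of HS must make every |r_ii|
# equal to the geometric mean of the singular values of H:
target = np.abs(np.linalg.det(H)) ** (1 / 3)
```

Since |det H| is also the product of the singular values, `target` is the common diagonal value any equal-diagonal decomposition must attain.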
11. Binary intersymbol interference channels: Gallager codes, density evolution, and code performance bounds
- Author
-
Kavcic, Aleksandar, Ma, Xiao, and Mitzenmacher, Michael
- Subjects
Communication -- Research - Abstract
We study the limits of performance of Gallager codes (low-density parity-check (LDPC) codes) over binary linear intersymbol interference (ISI) channels with additive white Gaussian noise (AWGN). Using the graph representations of the channel, the code, and the sum-product message-passing detector/decoder, we prove two error concentration theorems. Our proofs expand on previous work by handling complications introduced by the channel memory. We circumvent these problems by considering not just linear Gallager codes but also their cosets and by distinguishing between different types of message flow neighborhoods depending on the actual transmitted symbols. We compute the noise tolerance threshold using a suitably developed density evolution algorithm and verify, by simulation, that the thresholds represent accurate predictions of the performance of the iterative sum-product algorithm for finite (but large) block lengths. We also demonstrate that for high rates, the thresholds are very close to the theoretical limit of performance for Gallager codes over ISI channels. If C denotes the capacity of a binary ISI channel and if C_iid denotes the maximal achievable mutual information rate when the channel inputs are independent and identically distributed (i.i.d.) binary random variables (C_iid ≤ C), we prove that the maximum information rate achievable by the sum-product decoder of a Gallager (coset) code is upper-bounded by C_iid. The last topic investigated is the performance limit of the decoder if the trellis portion of the sum-product algorithm is executed only once; this demonstrates the potential for trading off the computational requirements and the performance of the decoder. Index Terms--Bahl-Cocke-Jelinek-Raviv (BCJR)-once bound, channel capacity, density evolution, Gallager codes, independent and identically distributed (i.i.d.) capacity, intersymbol interference (ISI) channel, low-density parity-check (LDPC) codes, sum-product algorithm, turbo equalization.
- Published
- 2003
12. Path partitions and forward-only trellis algorithms
- Author
-
Ma, Xiao and Kavcic, Aleksandar
- Subjects
Electrical engineering, Electrical engineering -- Research, Information theory -- Research, Lattice theory -- Usage, Coding theory -- Usage, Markov processes -- Research
- Abstract
This is a semitutorial paper on trellis-based algorithms. We argue that most decoding/detection algorithms described on trellises can be formulated as path-partitioning algorithms, with proper definitions of mappings from subsets of paths to metrics of subsets. Thereby, the only two operations needed are path-concatenation and path-collection, which play the roles of multiplication and addition, respectively. Furthermore, we show that the trellis structure permits the path-partitioning algorithms to be formulated as forward-only algorithms (with structures resembling the Viterbi algorithm), thus eliminating the need for backward computations regardless of what task needs to be performed on the trellis. While all of the actual decoding/detection algorithms presented here are rederivations of variations of previously known methods, we believe that the exposition of the algorithms in a unified manner as forward-only path-partitioning algorithms is the most intuitive manner in which to generalize the Viterbi algorithm. We also believe that this approach may, in fact, influence the practical implementation of the algorithms as well as influence the construction of other forward-only algorithms (e.g., byte-wise forward-only detection algorithms). Index Terms--Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm, forward-backward algorithm, forward-only algorithm, Markov processes, sum-product algorithm, trellis graph, Viterbi algorithm.
- Published
- 2003
13. The Viterbi Algorithm and Markov Noise Memory
- Author
-
Kavcic, Aleksandar and Moura, Jose M.F.
- Subjects
Information theory -- Research, Algorithms -- Research
- Abstract
This work designs sequence detectors for channels with intersymbol interference (ISI) and correlated (and/or signal-dependent) noise. We describe three major contributions. i) First, by modeling the noise as a finite-order Markov process, we derive the optimal maximum-likelihood sequence detector (MLSD) and the optimal maximum a posteriori (MAP) sequence detector, extending to the correlated noise case the Viterbi algorithm. We show that, when the signal-dependent noise is conditionally Gauss-Markov, the branch metrics in the MLSD are computed from the conditional second-order noise statistics. We evaluate the branch metrics using a bank of finite impulse response (FIR) filters. ii) Second, we characterize the error performance of the MLSD and MAP sequence detector. The error analysis of these detectors is complicated by the correlation asymmetry of the channel noise. We derive upper and lower bounds and computationally efficient approximations to these bounds based on the banded structure of the inverses of Gauss-Markov covariance matrices. An experimental study shows the tightness of these bounds. iii) Finally, we derive several classes of suboptimal sequence detectors, and demonstrate how these and others available in the literature relate to the MLSD. We compare their error rate performance and their relative computational complexity, and show how the structure of the MLSD and the performance evaluation guide us in choosing a best compromise between several types of suboptimal sequence detectors. Index Terms: Correlated noise, Gauss-Markov processes, intersymbol interference, Markov channel noise, Markov memory, maximum-likelihood sequence detection, Viterbi algorithm.
- Published
- 2000
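For reference, the memoryless-noise Viterbi recursion that the paper extends to Markov noise can be sketched generically. The branch-metric interface below is our illustrative choice; the paper's Markov-noise extension amounts to branch metrics that also condition on past noise samples.

```python
def viterbi(obs, states, branch_metric):
    """Min-sum Viterbi detection: returns the state sequence minimizing the
    summed branch metrics.  branch_metric(s_prev, s, y) is the cost of the
    trellis branch s_prev -> s given observation y."""
    cost = {s: 0.0 for s in states}       # every starting state equally likely
    back = []                             # backpointers, one dict per stage
    for y in obs:
        new_cost, ptr = {}, {}
        for s in states:
            best = min(states, key=lambda sp: cost[sp] + branch_metric(sp, s, y))
            new_cost[s] = cost[best] + branch_metric(best, s, y)
            ptr[s] = best
        cost, back = new_cost, back + [ptr]
    s = min(states, key=cost.get)         # traceback from the cheapest end state
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return path[::-1][1:]                 # drop the pre-transmission state
```

For example, on a 1-D (dicode) ISI channel, the state is the previous bit and a Gaussian-noise branch metric is the squared error (y - (s - s_prev))^2; feeding in the noiseless output of x = [1, 0, 1, 1, 0] recovers x exactly.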
14. Capacity of Multilevel NAND Flash Memory Channels
- Author
-
Li, Yonglong, Kavcic, Aleksandar, and Han, Guangyue
- Published
- 2017
15. Matrices with Banded Inverses: Inversion Algorithms and Factorization of Gauss-Markov Processes
- Author
-
Kavcic, Aleksandar and Moura, Jose M. F.
- Subjects
Gaussian processes -- Research, Markov processes -- Research, Matrices -- Analysis
- Abstract
The paper considers the inversion of full matrices whose inverses are L-banded. We derive a nested inversion algorithm for such matrices. Applied to a tridiagonal matrix, the algorithm provides its explicit inverse as an element-wise product (Hadamard product) of three matrices. When related to Gauss-Markov random processes (GMrp), this result provides a closed-form factored expression for the covariance matrix of a first-order GMrp. This factored form leads to the interpretation of a first-order GMrp as the product of three independent processes: a forward independent-increments process, a backward independent-increments process, and a variance-stationary process. We explore the nonuniqueness of the factorization and design it so that the forward and backward factor processes have minimum energy. We then consider the issue of approximating general nonstationary Gaussian processes by Gauss-Markov processes under two optimality criteria: the Kullback-Leibler distance and maximum entropy. The problem reduces to approximating general covariances by covariance matrices whose inverses are banded. Our inversion result is an efficient algorithmic solution to this problem. We evaluate the information loss between the original process and its Gauss-Markov approximation. Index Terms--Banded matrix, Cholesky decomposition, Gauss-Markov processes, inhomogeneous autoregressive processes, Kullback-Leibler distance, L-band complement, maximum-entropy method, potential matrix, tridiagonal matrix.
- Published
- 2000
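The banded-inverse structure the paper exploits is easy to verify numerically: the covariance matrix of a first-order Gauss-Markov process has a tridiagonal (L = 1 banded) inverse. A small check; the AR(1) parameter and matrix size below are arbitrary choices of ours.

```python
import numpy as np

n, a = 6, 0.7
idx = np.arange(n)
C = a ** np.abs(np.subtract.outer(idx, idx))   # covariance of a first-order GMrp
Cinv = np.linalg.inv(C)

# The inverse is 1-banded (tridiagonal), reflecting the first-order
# Markov property: entries beyond the first off-diagonal vanish.
far = np.abs(np.subtract.outer(idx, idx)) > 1
assert np.allclose(Cinv[far], 0.0, atol=1e-10)
```

The paper goes the other way as well, giving explicit nested inversion formulas for full matrices whose inverses are L-banded.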
16. Upper Bounds on the Capacities of Noncontrollable Finite-State Channels With/Without Feedback
- Author
-
Huang, Xiujie, Kavcic, Aleksandar, and Ma, Xiao
- Published
- 2012
17. Achievable Rates of MIMO Systems With Linear Precoding and Iterative LMMSE Detection.
- Author
-
Yuan, Xiaojun, Ping, Li, Xu, Chongbin, and Kavcic, Aleksandar
- Subjects
MIMO systems, LINEAR systems, ORTHOGONAL frequency division multiplexing, TRANSMITTERS (Communication), CODING theory
- Abstract
We establish area theorems for iterative detection and decoding (or simply, iterative detection) over coded linear systems, including multiple-input multiple-output channels, intersymbol interference channels, and orthogonal frequency-division multiplexing systems. We propose a linear precoding technique that asymptotically ensures the Gaussianness of the messages passed in iterative detection, as the transmission block length tends to infinity. Area theorems are established to characterize the behavior of the iterative receiver. We show that, for unconstrained signaling, the proposed single-code scheme with linear precoding and iterative linear minimum mean-square error (LMMSE) detection is potentially information lossless, under various assumptions on the availability of the channel state information at the transmitter. We further show that, for constrained signaling, our proposed single-code scheme considerably outperforms the conventional multicode parallel transmission scheme based on singular value decomposition and water-filling power allocation. Numerical results are provided to verify our analysis.
- Published
- 2014
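The LMMSE estimator at the heart of such an iterative receiver has a standard closed form. A minimal sketch of a single (non-iterative) pass, with our own function name and a zero-mean Gaussian-prior assumption:

```python
import numpy as np

def lmmse(y, H, sigma2, cx=1.0):
    """LMMSE estimate of x from y = H x + n,
    assuming x ~ (0, cx*I) and n ~ (0, sigma2*I)."""
    m = H.shape[0]
    # Wiener filter: W = Cx H^T (H Cx H^T + sigma2 I)^-1
    W = cx * H.T @ np.linalg.inv(cx * H @ H.T + sigma2 * np.eye(m))
    return W @ y
```

In a turbo-style iterative receiver the prior mean and variance would be refreshed from decoder feedback at every pass; this sketch shows only the linear estimation step. As sigma2 tends to 0 with an invertible H, the estimate approaches the exact inverse solution.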
18. Reliability Distributions of Truncated Max-Log-MAP (MLM) Detectors Applied to Binary ISI Channels.
- Author
-
Lim, Fabian and Kavcic, Aleksandar
- Subjects
RADIO detectors, DISTRIBUTION (Probability theory), CHANNEL capacity (Telecommunications), BINARY control systems, COMPUTER algorithms
- Abstract
The max-log-MAP (MLM) receiver is an approximated version of the well-known Bahl-Cocke-Jelinek-Raviv algorithm. The MLM algorithm is attractive due to its implementation simplicity. In practice, sliding-window implementations are preferred, whereby truncated signaling neighborhoods (around each transmission time instant) are considered. In this paper, we consider binary signaling sliding-window MLM receivers, where the MLM detector is truncated to a length-m signaling neighborhood. Here, truncation is used to ease the burden of analysis. For any number n of chosen time instants, we derive exact expressions for both 1) the joint distribution of the MLM symbol reliabilities, and 2) the joint probability of the erroneous MLM symbol detections. We show that the obtained expressions can be efficiently evaluated using Monte Carlo techniques. The most computationally expensive operation (in each Monte Carlo trial) is an eigenvalue decomposition of a size 2mn × 2mn matrix. The proposed method handles various scenarios such as correlated noise distributions, modulation coding, etc.
- Published
- 2013
19. Path Partitions and Forward-Only Trellis Algorithms.
- Author
-
Xiao Ma and Kavcic, Aleksandar
- Subjects
ALGORITHMS, PATH analysis (Statistics)
- Abstract
This is a semitutorial paper on trellis-based algorithms. We argue that most decoding/detection algorithms described on trellises can be formulated as path-partitioning algorithms, with proper definitions of mappings from subsets of paths to metrics of subsets. Thereby, the only two operations needed are path-concatenation and path-collection, which play the roles of multiplication and addition, respectively. Furthermore, we show that the trellis structure permits the path-partitioning algorithms to be formulated as forward-only algorithms (with structures resembling the Viterbi algorithm), thus eliminating the need for backward computations regardless of what task needs to be performed on the trellis. While all of the actual decoding/detection algorithms presented here are rederivations of variations of previously known methods, we believe that the exposition of the algorithms in a unified manner as forward-only path-partitioning algorithms is the most intuitive manner in which to generalize the Viterbi algorithm. We also believe that this approach may, in fact, influence the practical implementation of the algorithms as well as influence the construction of other forward-only algorithms (e.g., byte-wise forward-only detection algorithms).
- Published
- 2003
Discovery Service for Jio Institute Digital Library