6 results
Search Results
2. Path selection under multiple QoS constraints – a practical approach.
- Author: Giladi, Ran; Korach, Ephraim; Ohayon, Rony
- Subjects: ALGORITHMS; COMPUTATIONAL complexity; METHODOLOGY; ELECTRONIC data processing; COMPUTER simulation; PROBABILITY theory
- Abstract:
Path selection under multiple additive QoS constraints in high-speed networks is an NP-complete problem, and most algorithms proposed for QoS routing suffer from either excessive computational complexity or low performance. In this paper, we propose two practical heuristic algorithms based on iterative shortest-path computations with a dynamic cost function. The cost function combines several QoS constraints with a single optimization metric. A simulation program evaluates the performance of the algorithms; the results indicate that our algorithms achieve high performance with fast running times. [ABSTRACT FROM AUTHOR]
- Published: 2004
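The abstract above describes iterative shortest-path computation with a dynamic cost function. A minimal Python sketch of that general idea (not the authors' published algorithm; the graph layout, the weight-inflation rule `step`, and all function names are assumptions):

```python
import heapq

def dijkstra(graph, src, dst, edge_cost):
    """Shortest path from src to dst under edge_cost; returns the node list."""
    dist, prev, pq, seen = {src: 0.0}, {}, [(0.0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, attrs in graph[u].items():
            nd = d + edge_cost(attrs)
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

def qos_path(graph, src, dst, bounds, max_iter=20, step=2.0):
    """Iterative shortest-path search with a dynamic cost function.
    bounds maps each additive QoS metric (e.g. 'delay') to its bound.
    Weights on violated constraints are inflated until a feasible path
    is found or max_iter is exhausted."""
    weights = {m: 1.0 for m in bounds}
    for _ in range(max_iter):
        cost = lambda a: a["cost"] + sum(w * a[m] for m, w in weights.items())
        path = dijkstra(graph, src, dst, cost)
        if path is None:
            return None
        totals = {m: sum(graph[u][v][m] for u, v in zip(path, path[1:]))
                  for m in bounds}
        violated = [m for m in bounds if totals[m] > bounds[m]]
        if not violated:
            return path
        for m in violated:
            weights[m] *= step  # penalize the violated metric next round
    return None
```

Each round runs a plain Dijkstra search on a combined cost; metrics whose bounds were violated get a larger weight so the next search steers away from them.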
3. Fast POCS Based Post-Processing Technique for HDTV.
- Author: Kim, Yoon; Park, Chun-Su; Ko, Sung-Jea
- Subjects: ELECTRONIC data processing; TELEVISION broadcasting; HIGH definition television; DIGITAL television; COMPUTER simulation; ALGORITHMS; COMPUTATIONAL complexity
- Abstract:
In this paper, we present a novel postprocessing technique based on the theory of projections onto convex sets (POCS) to reduce blocking artifacts in digital high-definition television (HDTV) images. By detecting and eliminating the undesired high-frequency components caused mainly by blocking artifacts, we propose a new smoothness constraint set (SCS) and its projection operator in the DCT domain. In addition, we propose an improved quantization constraint set (QCS) that exploits the correlation of DCT coefficients between adjacent blocks: the range of the QCS is narrowed as close to the original DCT coefficients as possible, yielding a more effective projection onto the QCS. Computer simulation results indicate that the proposed schemes outperform conventional algorithms. Furthermore, since conventional POCS-based postprocessing techniques require computationally heavy forward/inverse discrete cosine transform (DCT/IDCT) operations, we introduce a fast implementation of the proposed algorithm that avoids DCT/IDCT operations entirely. Estimated computation savings range from 41% to 64% depending on the task. [ABSTRACT FROM AUTHOR]
- Published: 2003
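The core mechanism the abstract builds on is POCS: alternating projections onto convex sets. A toy one-dimensional sketch, assuming a DFT-subspace smoothness set as a stand-in for the paper's SCS and an elementwise box around the quantized values as the QCS (the signal, `delta`, and the kept bins are illustrative, not from the paper):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N)
                for k in range(N)).real / N
            for n in range(N)]

def project_smooth(x, keep):
    """Orthogonal projection onto the (convex) subspace of signals whose
    DFT bins outside `keep` are zero -- a toy stand-in for the SCS."""
    X = dft(x)
    return idft([c if k in keep else 0.0 for k, c in enumerate(X)])

def project_quant(x, q, delta):
    """Projection onto the quantization constraint set: each sample must
    stay within delta/2 of its dequantized value q[i] (elementwise clamp)."""
    return [min(max(xi, qi - delta / 2), qi + delta / 2)
            for xi, qi in zip(x, q)]

def pocs(q, delta, keep, iters=50):
    """Alternate the two projections, starting from the blocky signal q."""
    x = list(q)
    for _ in range(iters):
        x = project_quant(project_smooth(x, keep), q, delta)
    return x
```

Each pass smooths the signal (removing high-frequency "blocking" energy) and then clamps it back into the quantization box, so the iterate ends up both smooth and consistent with the quantized data; the paper's fast method avoids the transform pair that dominates this loop's cost.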
4. Soft decision decoding using the hierarchy of linear block codes.
- Author: Sato, Tadahiro; Tanaka, Hatsukazu
- Subjects: DECODERS (Electronics); ALGORITHMS; COMPUTATIONAL complexity; ERROR analysis in mathematics; COMPUTER simulation; ELECTRONIC data processing
- Abstract:
Soft decision decoding reduces the decoding error probability compared with hard decision decoding by exploiting channel measurement information. In this paper, a new soft decision decoding method is proposed in which decoding is carried out efficiently by exploiting the hierarchy of linear block codes: decoding proceeds sequentially from codes of higher coding rate (higher in the inclusion order), using the inclusion relationships among classes of codes. The method is superior to Chase algorithm 2 in both decoding error probability and decoding complexity. An improved algorithm that significantly reduces the maximum computational complexity is also proposed. Computer simulation results confirm that the proposed algorithm achieves decoding error probability close to that of maximum likelihood decoding, and that decoding is carried out at lower computational complexity than Chase algorithm 2 when the SNR is large. The improved algorithm likewise achieves near-maximum-likelihood error probability on practical communication channels with relatively large SNR. © 1999 Scripta Technica, Electron Comm Jpn Pt 3, 83(3): 108–114, 2000 [ABSTRACT FROM AUTHOR]
- Published: 2000
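The baseline the abstract compares against, Chase algorithm 2, can be sketched concretely on a small code. A minimal Python illustration of Chase-2-style decoding on the (7,4) Hamming code (the paper's hierarchical method itself is not reproduced here; the code choice and `p=2` are assumptions):

```python
from itertools import product

# (7,4) Hamming code: the parity-check columns are the binary numbers
# 1..7, so the syndrome directly indexes the single bit to flip.
def hamming_decode(bits):
    """Hard-decision syndrome decoding; returns the nearest codeword."""
    s = 0
    for i, b in enumerate(bits, start=1):
        if b:
            s ^= i
    out = list(bits)
    if s:                      # nonzero syndrome: flip the indicated bit
        out[s - 1] ^= 1
    return out

def chase2(soft, p=2):
    """Chase-2-style soft decoding: try the 2**p test patterns over the
    p least reliable positions, hard-decode each, and keep the codeword
    best correlated with the received soft values (BPSK: 0 -> +1, 1 -> -1)."""
    hard = [1 if y < 0 else 0 for y in soft]
    weak = sorted(range(len(soft)), key=lambda i: abs(soft[i]))[:p]
    best, best_metric = None, float("-inf")
    for flips in product([0, 1], repeat=p):
        trial = list(hard)
        for pos, f in zip(weak, flips):
            trial[pos] ^= f
        cw = hamming_decode(trial)
        # correlation metric: larger means closer to the received signal
        metric = sum((1 - 2 * b) * y for b, y in zip(cw, soft))
        if metric > best_metric:
            best, best_metric = cw, metric
    return best
```

Flipping test patterns over the least reliable positions lets the decoder recover words that plain hard-decision syndrome decoding mis-decodes, at the cost of 2^p trial decodings per word; the paper's hierarchical method targets exactly this complexity/performance trade-off.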
5. Differential Detection for 16 Amplitude/Phase Shift Keying (16 DAPSK) Using Viterbi Algorithm.
- Author: Nemoto, Koji; Sasase, Iwao
- Subjects: PHASE shift keying; PHASE modulation; AMPLITUDE modulation; ELECTRONIC data processing; COMPUTATIONAL complexity; ALGORITHMS; COMPUTER simulation
- Abstract:
Recently, differentially encoded 16 amplitude/phase shift keying (16 DAPSK) has been investigated intensively for mobile radio applications. Since the bit error rate (BER) performance of differential detection (DD) is inferior to that of coherent detection (CD), multisymbol DD of 16 DAPSK has been proposed. However, as the observation interval of multisymbol DD is increased to improve the BER performance, the computational complexity grows. In this paper, a differential detection scheme for 16 DAPSK based on maximum likelihood sequence estimation (MLSE) using the Viterbi algorithm is proposed, and its BER performance is investigated by computer simulation. It is shown that Viterbi-decoding DD outperforms three-symbol DD in BER without increasing the computational complexity. [ABSTRACT FROM AUTHOR]
- Published: 1996
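For context, conventional symbol-by-symbol differential detection, the baseline the proposed Viterbi-based scheme improves on, can be sketched in a few lines. This is a deliberately simplified 16 DAPSK model: two amplitude rings with an assumed ratio of 2, a toggling differential-amplitude bit, and a plain (non-Gray) differential 8-PSK phase mapping, none of which is taken from the paper:

```python
import cmath

RADII = (1.0, 2.0)          # two amplitude rings (assumed ratio, not from the paper)

def dapsk16_mod(symbols):
    """Differentially encode (a, p) pairs: a toggles the amplitude ring,
    p (0..7) advances the 8-PSK phase. Returns complex baseband samples,
    preceded by a known reference symbol."""
    ring, phase = 0, 0
    out = [RADII[0]]        # reference symbol
    for a, p in symbols:
        ring ^= a
        phase = (phase + p) % 8
        out.append(RADII[ring] * cmath.exp(1j * cmath.pi * phase / 4))
    return out

def dapsk16_demod(samples):
    """Conventional symbol-by-symbol differential detection: decide the
    ring toggle from the amplitude ratio of consecutive samples and the
    phase bits from their phase difference."""
    out = []
    for prev, cur in zip(samples, samples[1:]):
        ratio = abs(cur) / abs(prev)
        a = 1 if ratio > 2 ** 0.5 or ratio < 2 ** -0.5 else 0
        dphi = cmath.phase(cur / prev)
        p = round(dphi / (cmath.pi / 4)) % 8
        out.append((a, p))
    return out
```

Because each decision uses only one symbol pair, noise on a single sample corrupts two decisions; multisymbol DD and the paper's Viterbi/MLSE formulation both widen the observation window to mitigate exactly that.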
6. Floating-Point Error Analysis for Recursive Least-Square Algorithm Using UD Factorization.
- Author: Tsubokawa, Hiroshi; Kubota, Hajime; Tsujii, Shigeo
- Subjects: ERROR analysis in mathematics; MATHEMATICAL statistics; ALGORITHMS; NUMERICAL analysis; ELECTRONIC data processing; COMPUTATIONAL complexity; COMPUTER simulation
- Abstract:
Real-time execution of the recursive least-square algorithm is difficult on a conventional von Neumann processor because of its large computational complexity. It is known that the recursive least-square algorithm based on UD factorization, which is mathematically equivalent, can be realized by the systolic array proposed by Kung; however, such a systolic array is difficult to realize since it requires a tremendous number of processing elements and connections. In dedicated hardware, the word length of the processor affects both the processing speed and the hardware area, so minimizing the word length is important when constructing a systolic array for the UD-factorization-based recursive least-square algorithm. Evaluating the required word length calls for an error analysis of the algorithm under finite-word-length operation. This paper presents a finite-word-length floating-point error analysis for the recursive least-square algorithm based on UD factorization and evaluates the operation error. The convergence of the algorithm and the number of updates are then evaluated analytically. Finally, a computer simulation verifies the validity of the theoretical analysis of the convergence and the number of updates. [ABSTRACT FROM AUTHOR]
- Published: 1991
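The recursion under analysis is the recursive least-square update. A plain covariance-form RLS sketch in Python (deliberately not the UD-factorized variant the paper analyzes, and all parameter values are assumptions) shows the recursion whose finite-word-length behavior is at issue:

```python
def rls_identify(xs, ds, order, lam=0.99, delta=100.0):
    """Exponentially weighted RLS in plain covariance form (NOT the
    UD-factorized variant analyzed in the paper -- just a reference
    implementation of the same least-squares recursion).
    Identifies an `order`-tap FIR model d[n] ~ w . [x[n], x[n-1], ...]."""
    w = [0.0] * order
    # P = inverse correlation matrix, initialized to delta * I
    P = [[delta if i == j else 0.0 for j in range(order)] for i in range(order)]
    for n in range(order - 1, len(xs)):
        u = [xs[n - k] for k in range(order)]          # regressor vector
        Pu = [sum(P[i][j] * u[j] for j in range(order)) for i in range(order)]
        denom = lam + sum(u[i] * Pu[i] for i in range(order))
        g = [v / denom for v in Pu]                    # gain vector
        e = ds[n] - sum(w[i] * u[i] for i in range(order))  # a priori error
        w = [w[i] + g[i] * e for i in range(order)]
        # P <- (P - g u^T P) / lam
        uP = [sum(u[i] * P[i][j] for i in range(order)) for j in range(order)]
        P = [[(P[i][j] - g[i] * uP[j]) / lam for j in range(order)]
             for i in range(order)]
    return w
```

The direct update of P shown here is exactly what can lose positive definiteness under rounding; the UD-factorized form propagates P = UDU^T instead, which is what makes the paper's finite-word-length analysis tractable.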
Discovery Service for Jio Institute Digital Library