Search Results
9,967 results
2. Method for Assessing the Reliability of Information Under Conditions of Uncertainty Through the Use of a Priori and a Posteriori Information of the Decoder of Turbo Codes
- Author
Zaitsev, Sergei, Ryndych, Yevhen, Sokorynska, Natalia, Zaitseva, Liliia, Kurbet, Pavel, Horlynskyi, Borys, Kacprzyk, Janusz, Series Editor, Gomide, Fernando, Advisory Editor, Kaynak, Okyay, Advisory Editor, Liu, Derong, Advisory Editor, Pedrycz, Witold, Advisory Editor, Polycarpou, Marios M., Advisory Editor, Rudas, Imre J., Advisory Editor, Wang, Jun, Advisory Editor, Shkarlet, Serhiy, editor, Morozov, Anatoliy, editor, Palagin, Alexander, editor, Vinnikov, Dmitri, editor, Stoianov, Nikolai, editor, Zhelezniak, Mark, editor, and Kazymyr, Volodymyr, editor
- Published
- 2023
3. Comparing Different Decodings for Posit Arithmetic
- Author
Murillo, Raul, Mallasén, David, Del Barrio, Alberto A., Botella, Guillermo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Gustafson, John, editor, and Dimitrov, Vassil, editor
- Published
- 2022
4. Neural-like Real-Time Data Protection and Transmission System
- Author
Tsmots, Ivan, Rabyk, Vasyl, Skorokhoda, Oleksa, Tsymbal, Yurii, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Shakhovska, Natalya, editor, and Medykovskyy, Mykola O., editor
- Published
- 2021
5. Decoding up to 4 Errors in Hyperbolic-Like Abelian Codes by the Sakata Algorithm
- Author
Bernal, José Joaquín, Simón, Juan Jacobo, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Woeginger, Gerhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bajard, Jean Claude, editor, and Topuzoğlu, Alev, editor
- Published
- 2021
6. Hamming codes for wet paper steganography
- Author
Munuera, Carlos
- Published
- 2015
7. Influences of the Trained State Model into the Decoding of Elbow Motion Using Kalman Filter
- Author
Veslin, E. Y., Dutra, M. S., Bevilacqua, L., Raptopoulos, L. S. C., Andrade, W. S., Soares, J. G. M., Barbosa, Simone Diniz Junqueira, Editorial Board Member, Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Kotenko, Igor, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Orjuela-Cañón, Alvaro David, editor, Figueroa-García, Juan Carlos, editor, and Arias-Londoño, Julián David, editor
- Published
- 2019
8. A CRC-Aided LDPC Erasure Decoding Algorithm for SEUs Correcting in Small Satellites
- Author
Zheng, Hao, Song, Zinan, Zhang, Shuyi, Chai, Shuo, Shao, Liwei, Akan, Ozgur, Series editor, Bellavista, Paolo, Series editor, Cao, Jiannong, Series editor, Coulson, Geoffrey, Series editor, Dressler, Falko, Series editor, Ferrari, Domenico, Series editor, Gerla, Mario, Series editor, Kobayashi, Hisashi, Series editor, Palazzo, Sergio, Series editor, Sahni, Sartaj, Series editor, Shen, Xuemin Sherman, Series editor, Stan, Mircea, Series editor, Xiaohua, Jia, Series editor, Zomaya, Albert Y., Series editor, and Xin-lin, Huang, editor
- Published
- 2017
9. Unsupervised Analysis of Event-Related Potentials (ERPs) During an Emotional Go/NoGo Task
- Author
Masulli, Paolo, Masulli, Francesco, Rovetta, Stefano, Lintas, Alessandra, Villa, Alessandro E. P., Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Petrosino, Alfredo, editor, Loia, Vincenzo, editor, and Pedrycz, Witold, editor
- Published
- 2017
10. Torn-Paper Coding.
- Author
Shomorony, Ilan and Vahid, Alireza
- Subjects
- SEQUENTIAL analysis; DATA warehousing
- Abstract
We consider the problem of communicating over a channel that randomly “tears” the message block into small pieces of different sizes and shuffles them. For the binary torn-paper channel with block length $n$ and pieces of length $\mathrm{Geometric}(p_n)$, we characterize the capacity as $C = e^{-\alpha}$, where $\alpha = \lim_{n\to\infty} p_n \log n$. Our results show that the case of $\mathrm{Geometric}(p_n)$-length fragments and the case of deterministic length-$(1/p_n)$ fragments are qualitatively different and, surprisingly, the capacity of the former is larger. Intuitively, this is due to the fact that, in the random fragments case, large fragments are sometimes observed, which boosts the capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
11. On Microstructure Estimation Using Flatbed Scanners for Paper Surface-Based Authentication.
- Author
Liu, Runze and Wong, Chau-Wai
- Abstract
Paper surfaces under the microscopic view are observed to be formed by intertwisted wood fibers. Such structures of paper surfaces are unique from one location to another and are almost impossible to duplicate. Previous work used microscopic surface normals to characterize such intrinsic structures as a “fingerprint” of paper for security and forensic applications. In this work, we examine several key research questions of feature extraction in both scientific and engineering aspects to facilitate the deployment of paper surface-based authentication when flatbed scanners are used as the acquisition device. We analytically show that, under the unique optical setup of flatbed scanners, the specular reflection does not play a role in norm map estimation. We verify, using a larger dataset than prior work, that the scanner-acquired norm maps, although blurred, are consistent with those measured by confocal microscopes. We confirm that, when choosing an authentication feature, high spatial-frequency subbands of the heightmap are more powerful than the norm map. Finally, we show that it is possible to empirically calculate the physical dimensions of the paper patch needed to achieve a certain authentication performance in equal error rate (EER). We analytically show that log(EER) is decreasing linearly in the edge length of a paper patch. [ABSTRACT FROM AUTHOR]
- Published
- 2021
12. Watermarking Digital Images in the Frequency Domain: Performance and Attack Issues
- Author
Chroni, Maria, Fylakis, Angelos, Nikolopoulos, Stavros D., van der Aalst, Wil, Series editor, Mylopoulos, John, Series editor, Rosemann, Michael, Series editor, Shaw, Michael J., Series editor, Szyperski, Clemens, Series editor, Krempels, Karl-Heinz, editor, and Stocker, Alexander, editor
- Published
- 2014
13. Learning to Rank from Medical Imaging Data
- Author
Pedregosa, Fabian, Cauvet, Elodie, Varoquaux, Gaël, Pallier, Christophe, Thirion, Bertrand, Gramfort, Alexandre, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Wang, Fei, editor, Shen, Dinggang, editor, Yan, Pingkun, editor, and Suzuki, Kenji, editor
- Published
- 2012
14. Decoding Syllables from Human fMRI Activity
- Author
Otaka, Yohei, Osu, Rieko, Kawato, Mitsuo, Liu, Meigen, Murata, Satoshi, Kamitani, Yukiyasu, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Ishikawa, Masumi, editor, Doya, Kenji, editor, Miyamoto, Hiroyuki, editor, and Yamakawa, Takeshi, editor
- Published
- 2008
15. Ergodic Fading MIMO Dirty Paper and Broadcast Channels: Capacity Bounds and Lattice Strategies.
- Author
Hindy, Ahmed and Nosratinia, Aria
- Abstract
A multiple-input multiple-output (MIMO) version of the dirty paper channel is studied, where the channel input and the dirt experience the same fading process, and the fading channel state is known at the receiver. This represents settings where signal and interference sources are co-located, such as in the broadcast channel. First, a variant of Costa’s dirty paper coding is presented, whose achievable rates are within a constant gap to capacity for all signal and dirt powers. In addition, a lattice coding and decoding scheme is proposed, whose decision regions are independent of the channel realizations. Under Rayleigh fading, the gap to capacity of the lattice coding scheme vanishes with the number of receive antennas, even at finite Signal-to-Noise Ratio (SNR). Thus, although the capacity of the fading dirty paper channel remains unknown, this paper shows it is not far from its dirt-free counterpart. The insights from the dirty paper channel directly lead to transmission strategies for the two-user MIMO broadcast channel, where the transmitter emits a superposition of desired and undesired (dirt) signals with respect to each receiver. The performance of the lattice coding scheme is analyzed under different fading dynamics for the two users, showing that high-dimensional lattices achieve rates close to capacity. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
16. Dirty Paper Coding Based on Polar Codes and Probabilistic Shaping.
- Author
Sener, M. Yusuf, Bohnke, Ronald, Xu, Wen, and Kramer, Gerhard
- Abstract
A precoding technique based on polar codes and probabilistic shaping is introduced for dirty paper coding. Two variants of the precoding use multi-level shaping and sign-bit shaping in one dimension. The decoder uses multi-stage successive-cancellation list decoding with list-passing across the bit levels. The approach achieves approximately the same frame error rates as polar codes with multi-level shaping over standard additive white Gaussian noise channels at a block length of 256 symbols and with different amplitude shift keying (ASK) constellations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
17. Practical Dirty Paper Coding With Sum Codes.
- Author
Rege, Kiran M., Balachandran, Krishna, Kang, Joseph H., and Kemal Karakayali, M.
- Subjects
- CHANNEL coding; SIGNAL quantization; CONSTELLATION diagrams (Signal processing); DECODING algorithms; INTERFERENCE (Telecommunication)
- Abstract
In this paper, we present a practical method to construct dirty paper coding (DPC) schemes using sum codes. Unlike the commonly used approach to DPC where the coding scheme involves concatenation of a channel code and a quantization code, the proposed method embodies a unified approach that emulates the binning method used in the proof of the DPC result. Auxiliary bits are used to create the desired number of code vectors in each bin. Sum codes are obtained when information sequences augmented with auxiliary bits are encoded using linear block codes. Sum-code-based DPC schemes can be implemented using any linear block code, and entail a relatively small increase in decoder complexity when compared to standard communication schemes. They can also lead to significant reduction in transmit power in comparison to standard schemes. [ABSTRACT FROM AUTHOR]
- Published
- 2016
18. On the Dispersions of the Gel’fand–Pinsker Channel and Dirty Paper Coding.
- Author
Scarlett, Jonathan
- Subjects
- ERROR probability; GAUSSIAN channels; RANDOM noise theory; DISPERSIVE channels (Telecommunication); CHANNEL coding
- Abstract
This paper studies the second-order coding rates for memoryless channels with a state sequence known non-causally at the encoder. In the case of finite alphabets, an achievability result is obtained using constant-composition random coding, and by using a small fraction of the block to transmit the empirical distribution of the state sequence. For error probabilities less than 0.5, it is shown that the second-order rate improves on an existing one based on independent and identically distributed random coding. In the Gaussian case (dirty paper coding) with an almost-sure power constraint, an achievability result is obtained using random coding over the surface of a sphere, and using a small fraction of the block to transmit a quantized description of the state power. It is shown that the second-order asymptotics are identical to the single-user Gaussian channel of the same input power without a state. [ABSTRACT FROM AUTHOR]
- Published
- 2015
19. A Unified Framework for One-Shot Achievability via the Poisson Matching Lemma.
- Author
Li, Cheuk Ting and Anantharam, Venkat
- Subjects
- BROADCAST channels; SOURCE code; LOSSY data compression; CHANNEL coding; INFORMATION theory; INFORMATION networks; PAPER arts
- Abstract
We introduce a fundamental lemma called the Poisson matching lemma, and apply it to prove one-shot achievability results for various settings, namely channels with state information at the encoder, lossy source coding with side information at the decoder, joint source-channel coding, broadcast channels, distributed lossy source coding, multiple access channels and channel resolvability. Our one-shot bounds improve upon the best known one-shot bounds in most of the aforementioned settings (except multiple access channels and channel resolvability, where we recover bounds comparable to the best known bounds), with shorter proofs in some settings even when compared to the conventional asymptotic approach using typicality. The Poisson matching lemma replaces both the packing and covering lemmas, greatly simplifying the error analysis. This paper extends the work of Li and El Gamal on Poisson functional representation, which mainly considered variable-length source coding settings, whereas this paper studies fixed-length settings, and is not limited to source coding, showing that the Poisson functional representation is a viable alternative to typicality for most problems in network information theory. [ABSTRACT FROM AUTHOR]
- Published
- 2021
20. Am I, Me, and Who's She? Liberation Psychology, Historical Memory, and Muslim women.
- Author
Mohr, Sarah Huxtable
- Subjects
- COLLECTIVE memory; PSYCHOLOGY; PATRIARCHY; ISLAMOPHOBIA; PAPER arts; MUSLIM women; MISOGYNY
- Abstract
One of the central underpinnings of Islamophobia is the theoretical construction of Muslim women as "Other". Going hand in hand with colonization, the overall Orientalist imaginary has depicted Muslims as misogynistic, homophobic, and gynophobic in contrast to the normal and enlightened Western European subject. Liberation psychology, as a field of decolonial work, emphasizes several main tasks one of which is the recovery of historical memory in relation to how humans see each other and the world. This paper builds on the work of recovering historical memory to emphasize the Indo-European origins of misogyny and patriarchy and the subsequent cover-up of this history as a part of the legacy of colonialism and current narratives of Islamophobia. The paper concludes that the work of psychology should include decoding reality to uncover the true nature of the origins of patriarchy, thus building new, revitalized understandings of human society. [ABSTRACT FROM AUTHOR]
- Published
- 2020
21. Research on a soft saturation nonlinear SSVEP signal feature extraction algorithm.
- Author
Liu, Bo, Gao, Hongwei, Jiang, Yueqiu, and Wu, Jiaxuan
- Abstract
Brain–computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEP) have received widespread attention due to their high information transmission rate, high accuracy, and rich instruction set. However, the performance of SSVEP identification methods strongly depends on the amount of calibration data for within-subject classification. Some studies use deep learning (DL) algorithms for inter-subject classification, which can reduce the calculation process, but there is still much room for improvement in performance compared with intra-subject classification. To solve these problems, an efficient SSVEP signal recognition deep learning network model, e-SSVEPNet, based on the soft saturation nonlinear module is proposed in this paper. The soft saturation nonlinear module uses a similar exponential calculation method for output when it is less than zero, improving robustness to noise. Under the conditions of the SSVEP data set, two sliding time window lengths (1 s and 0.5 s), and three training data sizes, this paper evaluates the proposed network model and compares it with traditional and deep learning baseline methods. The experimental results of the nonlinear module were classified and compared. A large number of experimental results show that the proposed network has the highest average accuracy of intra-subject classification on the SSVEP data set, improves the performance of SSVEP signal classification and recognition, and has higher decoding accuracy under short signals, so it has great potential for realizing high-speed SSVEP-based BCIs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
22. The Distortions Region of Broadcasting Correlated Gaussians and Asymmetric Data Transmission Over a Gaussian BC.
- Author
Bross, Shraga I.
- Subjects
- DATA transmission systems; DIGITAL communications; GAUSSIAN channels; BROADCAST channels; ELECTRONIC paper; VIDEO coding; DIGITAL video broadcasting
- Abstract
A memoryless bivariate Gaussian source is transmitted to a pair of receivers over an average-power limited bandwidth-matched Gaussian broadcast channel. Based on their observations, Receiver 1 reconstructs the first source component while Receiver 2 reconstructs the second source component, both seeking to minimize the expected squared-error distortions. In addition to the source transmission, digital information at a specified rate should be conveyed reliably to Receiver 1, the “stronger” receiver. Given the message rate, we characterize the achievable distortions region. Specifically, there is an $\mathsf{SNR}$-threshold below which Dirty Paper coding of the digital information against a linear combination of the source components is optimal. The threshold is a function of the digital information rate, the source correlation, and the distortion at the “stronger” receiver. Above this threshold a Dirty Paper coding extension of the Tian–Diggavi–Shamai hybrid scheme is shown to be optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
23. Self-Powered Forward Error-Correcting Biosensor Based on Integration of Paper-Based Microfluidics and Self-Assembled Quick Response Codes.
- Author
Yuan, Mingquan, Liu, Keng-ku, Singamaneni, Srikanth, and Chakrabartty, Shantanu
- Abstract
This paper extends our previous work on silver-enhancement based self-assembling structures for designing reliable, self-powered biosensors with forward error correcting (FEC) capability. At the core of the proposed approach is the integration of paper-based microfluidics with quick response (QR) codes that can be optically scanned using a smart-phone. The scanned information is first decoded to obtain the location of a web-server, which further processes the self-assembled QR image to determine the concentration of target analytes. The integration substrate for the proposed FEC biosensor is polyethylene, and the patterning of the QR code on the substrate has been achieved using a combination of low-cost ink-jet printing and a regular ballpoint dispensing pen. A paper-based microfluidics channel has been integrated underneath the substrate for acquiring, mixing, and flowing the sample to areas on the substrate where different parts of the code can self-assemble in the presence of immobilized gold nanorods. In this paper we demonstrate proof-of-concept detection using prototypes of QR-encoded FEC biosensors. [ABSTRACT FROM PUBLISHER]
- Published
- 2016
24. I, NEURON: the neuron as the collective
- Author
Nizami, Lance
- Published
- 2017
25. Han–Kobayashi and Dirty-Paper Coding for Superchannel Optical Communications.
- Author
Koike-Akino, Toshiaki, Kojima, Keisuke, Millar, David S., Parsons, Kieran, Kametani, Soichiro, Sugihara, Takashi, Yoshida, Tsuyoshi, Ishida, Kazuyuki, Miyata, Yoshikuni, Matsumoto, Wataru, and Mizuochi, Takashi
- Abstract
Superchannel transmission is a candidate to realize Tb/s-class high-speed optical communications. In order to achieve higher spectrum efficiency, the channel spacing shall be as narrow as possible. However, densely allocated channels can cause non-negligible inter-channel interference (ICI) especially when the channel spacing is close to or below the Nyquist bandwidth. In this paper, we consider joint decoding to cancel the ICI in dense superchannel transmission. To further improve the spectrum efficiency, we propose the use of Han–Kobayashi superposition coding. In addition, for the case when neighboring subchannel transmitters can share data, we introduce dirty-paper coding for pre-cancelation of the ICI. We analytically evaluate the potential gains of these methods when ICI is present for sub-Nyquist channel spacing. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
26. Dirty Paper Coding for Gaussian Cognitive Z-Interference Channel: Performance Results.
- Author
Al-qudah, Zouhair and Rajan, Dinesh
- Abstract
In this paper, we present a practical application of dirty paper coding (DPC) for the Gaussian cognitive Z-interference channel. A two stage transmission scheme is proposed in which the cognitive transmitter first obtains the interference signal from the primary transmitter and then uses DPC to improve the performance of the cognitive link. Numerical results show that causal knowledge of the interference provides more than 3 dB improvement in performance in certain scenarios over a scheme that does not use interference cancellation. Results are also shown when the cognitive transmitter operates in both half-duplex and full-duplex modes. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
27. A dirty paper coding scheme for the Multiple Input Multiple Output Broadcast Channel.
- Author
Saradka, Balakrishna, Bhashyam, Srikrishna, and Thangaraj, Andrew
- Abstract
Dirty paper coding (DPC) is known to achieve the capacity region of a Gaussian Multiple Input Multiple Output-Broadcast channel (MIMO-BC). Practical DPC schemes using finite length codes are still being actively studied. In this paper, we design a zero-forcing DPC (ZF-DPC) scheme using trellis shaping and Low Density Parity Check (LDPC) codes for a MIMO-BC with two transmit antennas and two users (receivers), each with one antenna. This is an extension of an earlier design for the single antenna Gaussian broadcast channel. One of the important aspects of the DPC code design is the introduction of a one block delay that enables the channel encoder (and decoder) and the shaping encoder (and decoder) to operate independently. In the ZF-DPC method, the MIMO precoder ensures that one user has no interference. The other user uses DPC to combat interference. The performance of this method is compared using simulations with the capacity limit and simpler precoder based methods like Minimum Mean Square Error-Vector Perturbation (MMSE-VP) and zero-forcing beamforming (ZF-BF). [ABSTRACT FROM PUBLISHER]
- Published
- 2012
28. Localized Error Correction in Projective Space.
- Author
Cai, Ning
- Subjects
- PAPER arts; ERRORS; CODING theory; DIMENSIONS
- Abstract
In this paper, we extend the localized error correction code introduced by L. A. Bassalygo and coworkers from Hamming space to projective space. For constant dimensional localized error correction codes in projective space, we have a lower bound and an upper bound of the capacity, which are asymptotically tight when $z < x \leq \frac{n-z}{2}$, where $x$, $z$, and $n$ are the dimensions of codewords, error configurations, and the ground space, respectively. We determine the capacity of nonconstant dimensional localized error correction codes when $z < \frac{n}{3}$. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
29. On the Multiple-Access Channel With Common Rate-Limited Feedback.
- Author
Shaviv, Dor and Steinberg, Yossef
- Subjects
- ENCODING; PAPER arts; MATHEMATICS; MARKOV processes; GAUSSIAN channels
- Abstract
This paper studies the multiple-access channel (MAC) with rate-limited feedback. The channel output is encoded into one stream of bits, which is provided causally to the two users at the channel input. An achievable rate region for this setup is derived, based on superposition of information, block Markov coding, and coding with various degrees of side information for the feedback link. The suggested region coincides with the Cover–Leung inner bound for large feedback rates. The result is then extended for cases where there is only a feedback link to one of the transmitters, and for a more general case where there are two separate feedback links to both transmitters. We compute achievable regions for the Gaussian MAC and for the binary erasure MAC. The Gaussian region is computed for the case of common rate-limited feedback, whereas the region for the binary erasure MAC is computed for one-sided feedback. It is known that for the latter, the Cover–Leung region is tight, and we obtain results that coincide with the feedback capacity region for high feedback rates. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
30. A Density Evolution Based Framework for Dirty-Paper Code Design Using TCQ and Multilevel LDPC Codes.
- Author
Yang, Xiong, Zixiang, Yu-chun, Wu, and Zhang, Philipp
- Abstract
We propose a density evolution based dirty-paper code design framework that combines trellis coded quantization with multi-level low-density parity-check (LDPC) codes. Unlike existing design techniques based on Gaussian approximation and EXIT charts, the proposed framework tracks the empirically collected log-likelihood ratio (LLR) distributions at each iteration, and employs density evolution and differential evolution algorithms to design each LDPC component code. The performance of the dirty-paper codes designed using the proposed method comes within 0.37 dB of the theoretical limit at a transmission rate of 1 bit per sample, achieving a 0.21 dB gain over the best known result. [ABSTRACT FROM PUBLISHER]
- Published
- 2012
31. Dictionary form in decoding, encoding and retention: Further insights.
- Author
DZIEMIANKO, ANNA
- Subjects
- ELECTRONIC dictionaries; FOREIGN language education; PHONOLOGICAL encoding; PHONOLOGICAL decoding; ENCYCLOPEDIAS & dictionaries; COLLOCATION (Linguistics)
- Abstract
The aim of the paper is to investigate the role of dictionary form (paper versus electronic) in language reception, production and retention. The body of existing research does not give a clear answer as to which dictionary medium benefits users more. Divergent findings from many studies into the topic might stem from differences in research methodology (including the various tasks, participants and dictionaries used by different authors). Even a series of studies conducted by one researcher (Dziemianko, 2010, 2011, 2012b) leads to contradictory conclusions, possibly because of the use of paper and electronic versions of existing dictionaries, and the resulting problem with isolating dictionary form as a factor. To be able to argue with confidence that the results obtained follow from different dictionary formats, rather than presentation issues, research methodology should be improved. To successfully generalize about the significance of the medium for decoding, encoding and learning, the current study replicates previous research, but the presentation of lexicographic data on paper and on screen is now balanced, and the paper/electronic opposition is operationalized more appropriately. A real online dictionary and its paper-based counterpart composed of printouts of screen displays were used in the experiment in which the meaning of English nouns and phrases was explained, and collocations were completed with missing prepositions. A delayed post-test checked the retention of the meanings and collocations. The results indicate that dictionary medium does not play a statistically significant role in reception and production, but it considerably affects retention. [ABSTRACT FROM AUTHOR]
- Published
- 2017
32. Hill Matrix and Radix-64 Bit Algorithm to Preserve Data Confidentiality.
- Author
Arshad, Ali, Nadeem, Muhammad, Riaz, Saman, Zahra, Syeda Wajiha, Dutta, Ashit Kumar, Alzaid, Zaid, Alabdan, Rana, Almutairi, Badr, and Almotairi, Sultan
- Subjects
- DATA encryption; DATA security; DATA protection; ALGORITHMS; CONFIDENTIAL communications
- Abstract
Many cloud data security techniques and algorithms are available for detecting attacks on cloud data, but they cannot protect the data from an attacker. Cloud cryptography is the best way to transmit data in a secure and reliable format. Researchers have developed various mechanisms for transferring data securely by converting it from readable to unreadable form, but these algorithms alone do not provide complete data security; each has its own weaknesses. With effective data protection techniques, an attacker can neither decipher the encrypted data nor, even after tampering with it, gain access to the original data. In this paper, several data security techniques are developed that together protect data from attackers. First, a customized American Standard Code for Information Interchange (ASCII) table is developed, with the value of each index defined in that table; an attacker attempting to decrypt the data will typically apply the standard predefined ASCII table to the ciphertext, which would otherwise help the attacker recover the data. Next, a radix-64 encryption mechanism is used, which doubles the amount of cipher data relative to the original; when the attacker tries to decrypt each value, the result bears no relation to the original data. Finally, a Hill matrix algorithm is created to generate a key that is bound to the exact plaintext for which it was created and cannot be used for any other plaintext; the scope of each Hill key extends only to that text.
The techniques used in this paper are compared with those of various prior papers, and it is discussed how the proposed algorithm improves on them. The Kasiski test is then used to verify the validity of the proposed algorithm; the results show that when the proposed algorithm is used for data encryption, an attacker cannot break its security using any existing technique or algorithm. [ABSTRACT FROM AUTHOR]
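The Kasiski test mentioned in the abstract can be sketched in a few lines: it looks for repeated substrings in the ciphertext and takes the gcd of the distances between repetitions as an estimate of the key period. A minimal illustration; the function name and parameters below are hypothetical, not from the paper.

```python
from functools import reduce
from math import gcd

def kasiski_key_length(ciphertext: str, seq_len: int = 3):
    """Estimate a repeating-key length: record positions of every
    substring of length seq_len, collect distances between repeats,
    and return the gcd of those distances (None if no repeats)."""
    positions = {}
    distances = []
    for i in range(len(ciphertext) - seq_len + 1):
        seq = ciphertext[i:i + seq_len]
        if seq in positions:
            distances.append(i - positions[seq])
        positions[seq] = i
    if not distances:
        return None
    return reduce(gcd, distances)
```

For a Vigenere-style cipher, repeated plaintext fragments encrypted under the same key offset produce repeated ciphertext fragments at distances that are multiples of the key length, which is why the gcd is informative.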
- Published
- 2023
- Full Text
- View/download PDF
33. Practical Dirty Paper Coding Schemes Using One Error Correction Code With Syndrome.
- Author
-
Kim, Taehyun, Kwon, Kyunghoon, and Heo, Jun
- Abstract
Dirty paper coding (DPC) offers an information-theoretic result for pre-cancellation of known interference at the transmitter. In this letter, we propose practical DPC schemes that use only one error correction code. Our designs focus on practical use from the viewpoint of complexity. For a fair comparison with previous schemes, we compute the complexity of the proposed schemes by the number of operations used. Simulation results show that, compared to previous DPC schemes, the proposed schemes require lower transmission power to keep the bit error rate below 10^-5. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
34. Fast Detection Fusion Network (FDFnet): An End to End Object Detection Framework Based on Heterogeneous Image Fusion for Power Facility Inspection.
- Author
-
Xu, Xiang, Liu, Gang, Bavirisetti, Durga Prasad, Zhang, Xiangbo, Sun, Boyang, and Xiao, Gang
- Subjects
OBJECT recognition (Computer vision) ,IMAGE fusion ,FEATURE extraction ,IMAGING systems ,COMPUTATIONAL complexity ,FACILITIES - Abstract
Visual surveillance for autonomous power facility inspection is considered a prominent field of study in the power industry. Existing research focuses on either object detection or image fusion alone, lacking an overall treatment. To address this, a single end-to-end object detection method incorporating image fusion, named the Fast Detection Fusion Network (FDFNet), is proposed in this paper to output qualitative fused images together with detection results. The parameters in FDFNet are greatly reduced by sharing the feature extraction network between the image fusion and object detection tasks, yielding a large reduction in computational complexity. On this basis, object detection performance on various types of power facility images is compared and analyzed. For experimentation, an IR (infrared) and VIS (visible) image acquisition system has also been designed. In addition, a dataset named CVPower, with different sets of images for power facility fusion detection, is constructed for this research field. Experimental results demonstrate that the proposed method achieves an mAP of no less than 70%, processes 2 frames per second, and produces high-quality fused images. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
35. Why One and Two Do Not Make Three: Dictionary Form Revisited
- Author
-
Anna Dziemianko
- Subjects
paper dictionaries ,electronic dictionaries ,dictionary use ,encoding ,decoding ,retention ,research methods ,replication ,menus ,highlighting ,noise ,access ,entry length ,Philology. Linguistics ,P1-1091 ,Languages and literature of Eastern Asia, Africa, Oceania ,PL1-8844 ,Germanic languages. Scandinavian languages ,PD1-7159 - Abstract
The primary aim of the article is to compare the usefulness of paper and electronic versions of OALDCE7 (Wehmeier 2005) for language encoding, decoding and learning. It is explained why, in contrast to Dziemianko's (2010) findings concerning COBUILD6 (Sinclair 2008), but in keeping with her observations (Dziemianko 2011) with regard to LDOCE5 (Mayor 2009), the e-version of OALDCE7 proved to be no better for language reception, production and learning than the dictionary in book form. An attempt is made to pinpoint the micro- and macrostructural design features which make e-COBUILD6 a better learning tool than e-OALDCE7 and e-LDOCE5. Recommendations concerning further research into the significance of the medium (paper vs. electronic) in the process of dictionary use conclude the study. The secondary aim which the paper attempts to achieve is to present the status of replication as a scientific research method and justify its use in lexicography.
- Published
- 2012
- Full Text
- View/download PDF
36. When network coding and dirty paper coding meet in a cooperative ad hoc network.
- Author
-
Fawaz, N., Gesbert, D., and Debbah, M.
- Abstract
We develop and analyze new cooperative strategies for ad hoc networks that are more spectrally efficient than classical decode & forward (DF) protocols. Using analog network coding, our strategies preserve the practical half-duplex assumption but relax the orthogonality constraint. The interference introduced by non-orthogonality is mitigated by precoding, in particular dirty paper coding. Combined with smart power allocation, our cooperation strategies save time and lead to a more efficient use of the bandwidth and to improved network throughput with respect to classical repetition-DF and parallel-DF. [ABSTRACT FROM PUBLISHER]
- Published
- 2008
- Full Text
- View/download PDF
37. Variable-Length Coding With Shared Incremental Redundancy: Design Methods and Examples.
- Author
-
Wang, Haobo, Ranganathan, Sudarsan V. S., and Wesel, Richard D.
- Subjects
VIDEO coding ,LOW density parity check codes ,PARALLEL processing ,REDUNDANCY in engineering - Abstract
Variable-length (VL) coding with feedback is a commonly used technique that can approach point-to-point Shannon channel capacity with a significantly shorter average codeword length than fixed-length coding without feedback. This paper uses the inter-frame coding of Zeineddine and Mansour, originally introduced to address varying channel-state conditions in broadcast wireless communication, to approach capacity on point-to-point channels using VL codes without feedback. The per-symbol complexity is comparable to decoding the VL code with feedback (plus the additional complexity of a small peeling decoder amortized over many VL codes) and presents the opportunity for encoders and decoders that utilize massive parallel processing, where each VL decoder can process simultaneously. This paper provides an analytical framework and a design process for the degree distribution of the inter-frame code that allows the feedback-free system to achieve 96% or more of the throughput of the original VL code with feedback. As examples of VL codes, we consider non-binary (NB) low-density parity-check (LDPC), binary LDPC, and convolutional VL codes. The NB-LDPC VL code with an 8-bit CRC and an average codeword length of 336 bits achieves 85% of capacity with four rounds of ACK/NACK feedback. The proposed scheme using shared incremental redundancy without feedback achieves 97% of that performance or 83% of the channel capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
38. Overview and Efficiency of Decoder-Side Depth Estimation in MPEG Immersive Video.
- Author
-
Mieloch, Dawid, Garus, Patrick, Milovanovic, Marta, Jung, Joel, Jeong, Jun Young, Ravi, Smitha Lingadahalli, and Salahieh, Basel
- Subjects
VIDEO coding ,MACHINE learning ,VIDEOS ,VIDEO codecs ,BINARY sequences ,VIDEO processing - Abstract
This paper presents the overview and rationale behind the Decoder-Side Depth Estimation (DSDE) mode of the MPEG Immersive Video (MIV) standard, using the Geometry Absent profile, for efficient compression of immersive multiview video. A MIV bitstream generated by an encoder operating in the DSDE mode does not include depth maps; it contains only the information required to reconstruct them in the client or in the cloud: decoded views and metadata. The paper explains the technical details and techniques supported by this novel MIV DSDE mode. The description additionally includes the specification of Geometry Assistance Supplemental Enhancement Information, which helps to reduce the complexity of depth estimation when performed in the cloud or at the decoder side. Depth estimation in MIV is a non-normative part of the decoding process; therefore, any method can be used to compute the depth maps. This paper lists a set of requirements for depth estimation induced by the specific characteristics of the DSDE. The depth estimation reference software, continuously and collaboratively developed with MIV to meet these requirements, is presented in this paper. Several original experimental results are presented, and the efficiency of the DSDE is compared to two MIV profiles. The combined non-transmission of depth maps and efficient coding of textures enabled by the DSDE leads to efficient compression and improved rendering quality compared to the usual encoder-side depth estimation. Moreover, results of the first evaluation of state-of-the-art multiview depth estimators in the DSDE context, including machine learning techniques, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
39. On Polar Coding for Side Information Channels.
- Author
-
Beilin, Barak and Burshtein, David
- Subjects
- *
CHANNEL coding , *SOURCE code , *BINARY codes - Abstract
We propose a successive cancellation list (SCL) encoding and decoding scheme for the Gelfand-Pinsker (GP) problem based on the known nested polar coding scheme. It applies SCL encoding for the source coding part, and SCL decoding with a properly defined CRC for the channel coding part. The scheme shows improved performance compared to the existing method. A known issue with nested polar codes for binary dirty paper is the existence of frozen channel code bits that are not frozen in the source code. These bits need to be retransmitted in a second phase of the scheme, thus reducing the rate and increasing the required blocklength. We provide an improved bound on the size of this set, and on its scaling with respect to the blocklength, when the Bhattacharyya parameter of the test channel used for source coding is sufficiently large, or the Bhattacharyya parameter of the channel seen at the decoder is sufficiently small. The result is formulated for an arbitrary binary-input memoryless GP problem, since unlike the previous results, it does not require degradedness of the two channels mentioned above. Finally, we present simulation results for binary dirty paper and noisy write once memory codes. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
40. Decoding of MDP Convolutional Codes over the Erasure Channel under Linear Systems Point of View.
- Author
-
García-Planas, Maria Isabel and Um, Laurence E.
- Subjects
LINEAR dynamical systems ,BLOCK codes ,TIME complexity ,SYSTEMS theory ,LINEAR systems ,CONTROLLABILITY in systems engineering - Abstract
This paper highlights the decoding capabilities of MDP convolutional codes over the erasure channel by defining them as discrete linear dynamical systems, so that the controllability property and observability characteristics of linear system theory can be applied, in particular output observability, which is easily described using matrix language. These are compared against the decoding capabilities of MDS block codes over the same channel. Not only is the time complexity better, but the decoding capabilities are also increased with this approach, because convolutional codes are more flexible in handling variable-length data streams than block codes, which are fixed-length and less adaptable to varying data lengths without padding or other adjustments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. The Effect of Local Decodability Constraints on Variable-Length Compression.
- Author
-
Pananjady, Ashwin and Courtade, Thomas A.
- Subjects
CODING theory ,BINARY sequences ,TELECOMMUNICATION ,COMPUTATIONAL complexity ,DATA structures - Abstract
We consider a variable-length source coding problem subject to local decodability constraints. In particular, we investigate the blocklength scaling behavior attainable by encodings of $r$-sparse binary sequences, under the constraint that any source bit can be correctly decoded upon probing at most $d$ codeword bits. We consider both adaptive and non-adaptive access models, and derive upper and lower bounds that often coincide up to constant factors. Such a characterization for the fixed-blocklength analog of our problem, known as the bit probe complexity of static membership, remains unknown despite considerable attention from researchers over the last few decades. We also show that locally decodable schemes for sparse sequences are able to decode 0s (frequent source symbols) of the source with far fewer probes on average than they can decode 1s (infrequent source symbols), thus rigorizing the notion that infrequent symbols require high probe complexity, even on average. Connections to the fixed-blocklength model and to communication complexity are also briefly discussed. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
42. Source Coding When the Side Information May Be Delayed.
- Author
-
Simeone, Osvaldo and Permuter, Haim Henri
- Subjects
MARKOV processes ,MEMORY ,PAPER arts ,CHANNEL coding ,INFORMATION theory - Abstract
For memoryless sources, delayed side information at the decoder does not improve the rate-distortion function. However, this is not the case for sources with memory, as demonstrated by a number of works focusing on the special case of (delayed) feedforward. In this paper, a setting is studied in which the encoder is potentially uncertain about the delay with which measurements of the side information, which is available at the encoder, are acquired at the decoder. Assuming a hidden Markov model for the source sequences, at first, a single-letter characterization is given for the setup where the side information delay is arbitrary and known at the encoder, and the reconstruction at the destination is required to be asymptotically lossless. Then, with delay equal to zero or one source symbol, a single-letter characterization of the rate-distortion region is given for the case where, unbeknownst to the encoder, the side information may be delayed or not. Finally, examples for binary and Gaussian sources are provided. [ABSTRACT FROM PUBLISHER]
- Published
- 2013
- Full Text
- View/download PDF
43. A Texture-Hidden Anti-Counterfeiting QR Code and Authentication Method.
- Author
-
Wang, Tianyu, Zheng, Hong, You, Changhui, and Ju, Jianping
- Subjects
TWO-dimensional bar codes ,GAUSSIAN distribution - Abstract
This paper designs a texture-hidden QR code to prevent the illegal copying of a QR code due to its lack of anti-counterfeiting ability. Combining random texture patterns and a refined QR code, the code is not only capable of regular coding but also has a strong anti-copying capability. Based on the proposed code, a quality assessment algorithm (MAF) and a dual feature detection algorithm (DFDA) are also proposed. The MAF is compared with several current algorithms without reference and achieves a 95% and 96% accuracy for blur type and blur degree, respectively. The DFDA is compared with various texture and corner methods and achieves an accuracy, precision, and recall of up to 100%, and also performs well on attacked datasets with reduction and cut. Experiments on self-built datasets show that the code designed in this paper has excellent feasibility and anti-counterfeiting performance. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
44. Adaptive Graph Auto-Encoder for General Data Clustering.
- Author
-
Li, Xuelong, Zhang, Hongyuan, and Zhang, Rui
- Subjects
WEIGHTED graphs ,TASK analysis ,FUZZY clustering technique ,TANNER graphs - Abstract
Graph-based clustering plays an important role in the clustering area. Recent studies of graph neural networks (GNNs) have achieved impressive success on graph-type data. However, in general clustering tasks the graph structure of the data does not exist, so GNNs cannot be applied to clustering directly, and the strategy used to construct a graph is crucial for performance. How to extend GNNs to general clustering tasks is therefore an attractive problem. In this paper, we propose a graph auto-encoder for general data clustering, AdaGAE, which constructs the graph adaptively according to the generative perspective of graphs. The adaptive process is designed to induce the model to exploit the high-level information behind the data and to utilize the non-Euclidean structure sufficiently. Importantly, we find that a simple update of the graph results in severe degeneration, which can be summarized as: better reconstruction means worse updates. We provide rigorous analysis both theoretically and empirically, and then design a novel mechanism to avoid the collapse. By extending generative graph models to general data, a graph auto-encoder with a novel decoder is devised, and weighted graphs can also be applied to GNNs. AdaGAE performs well and stably on datasets of different scales and types. Besides, it is insensitive to parameter initialization and requires no pretraining. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
45. Data Dissemination Using Instantly Decodable Binary Codes in Fog-Radio Access Networks.
- Author
-
Douik, Ahmed and Sorour, Sameh
- Subjects
SELECTIVE dissemination of information ,BINARY codes ,RADIO access networks ,MOBILE communication systems ,LINEAR network coding ,DECODING algorithms - Abstract
This paper considers a device-to-device (D2D) fog-radio access network wherein a set of users are required to store/receive a set of files. The D2D devices are connected to a subset of the cloud data centers and thus possess a subset of the data. This paper is interested in reducing the total time of communication, i.e., the completion time, required to disseminate all files among all devices using instantly decodable network coding (IDNC). Unlike previous studies that assume a fully connected communication network, this paper tackles the more realistic scenario of a partially connected network in which devices are not all in the transmission range of one another. The joint optimization of selecting the transmitting device(s) and the file combination(s) is first formulated, and its intractability is exhibited. The completion time is approximated using the celebrated decoding delay approach by deriving the relationship between the quantities in a partially connected network. The paper introduces the cooperation graph and demonstrates that the problem is equivalent to a maximum weight clique problem over the newly designed graph. Extensive simulations reveal that the proposed solution provides noticeable performance enhancement and outperforms previously proposed IDNC-based schemes. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
46. Securing Files Using Hybrid Cryptography.
- Author
-
Padmavathi, V., Kumar, M. Jayanth, and Reddy, M. Saikiran
- Subjects
DATA security ,PUBLIC key cryptography ,DATA encryption ,CRYPTOGRAPHY ,CONFIDENTIAL communications - Abstract
Data security in terms of file encryption is crucial, as nowadays every file contains either personal information or industrial data. Hybrid encryption transfers data using a combination of symmetric and asymmetric encryption algorithms, so users can send files safely. Because asymmetric encryption alone slows down the encryption process, symmetric encryption is used for the file itself, and the advantages of both forms of encryption are utilized. The result is another layer of security with no extra burden on system performance. The idea of hybrid encryption is simple: instead of using AES alone, we use AES to encrypt the file and then, to maintain the secrecy of the key, encrypt the AES key using RSA. This paper discusses a different approach for securely storing files, and the proposed scheme also ensures that the model provides confidentiality and integrity mechanisms. [ABSTRACT FROM AUTHOR]
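The hybrid-envelope pattern this abstract describes (symmetric cipher for the bulk data, asymmetric cipher only for the short session key) can be sketched as follows. To keep the sketch self-contained, a SHA-256 keystream XOR stands in for AES, and the RSA step is passed in as a callable; all names here are illustrative, not from the paper, and the toy cipher is not secure.

```python
import hashlib
import os

def stream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher standing in for AES (illustration only):
    XOR the data with a SHA-256-derived keystream."""
    keystream = bytearray()
    counter = 0
    while len(keystream) < len(data):
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        keystream.extend(block)
        counter += 1
    return bytes(a ^ b for a, b in zip(data, keystream))

def hybrid_encrypt(file_bytes: bytes, rsa_encrypt):
    """Hybrid envelope: fast symmetric cipher for the file,
    slow asymmetric cipher only for the 32-byte session key."""
    session_key = os.urandom(32)            # fresh key per file
    ciphertext = stream_xor(session_key, file_bytes)
    wrapped_key = rsa_encrypt(session_key)  # e.g. RSA-OAEP in practice
    return wrapped_key, ciphertext

def hybrid_decrypt(wrapped_key, ciphertext, rsa_decrypt):
    session_key = rsa_decrypt(wrapped_key)
    return stream_xor(session_key, ciphertext)  # XOR is its own inverse
```

Because only the short session key goes through the asymmetric primitive, large files pay essentially only the symmetric cost, which is the performance point the abstract makes.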
- Published
- 2021
47. Fundamental Limits of Distributed Linear Encoding.
- Author
-
Khooshemehr, Nastaran Abadi and Maddah-Ali, Mohammad Ali
- Subjects
CODING theory ,FINITE fields ,ENCODING ,LINEAR systems ,CHANNEL coding ,LINEAR codes - Abstract
In general coding theory, we often assume that error is observed in transferring or storing encoded symbols, while the process of encoding itself is error-free. Motivated by recent applications of coding theory, in this paper, we consider the case where the process of encoding is distributed and prone to error. We introduce the problem of distributed encoding, comprised of a set of $K \in \mathbb{N}$ isolated source nodes and $N \in \mathbb{N}$ encoding nodes. Each source node has one symbol from a finite field, which is sent to each of the encoding nodes. Each encoding node stores an encoded symbol from the same field, as a function of the received symbols. However, some of the source nodes are controlled by the adversary and may send different symbols to different encoding nodes. Depending on the number of the adversarial nodes, denoted by $\beta \in \mathbb{N}$, and the cardinality of the set of symbols that each one generates, denoted by $v \in \mathbb{N}$, the process of decoding from the encoded symbols could be impossible. Assume that a decoder connects to an arbitrary subset of $t \in \mathbb{N}$ encoding nodes and wants to decode the symbols of the honest nodes correctly, without necessarily identifying the sets of honest and adversarial nodes. An important characteristic of a distributed encoding system is $t^{*} \in \mathbb{N}$, the minimum of such $t$, which is a function of $K$, $N$, $\beta$, and $v$. In this paper, we study the distributed linear encoding system, i.e. one in which the encoding nodes use linear coding. We show that $t^{*}_{\textrm{Linear}}=K+2\beta(v-1)$ if $N\ge K+2\beta(v-1)$, and $t^{*}_{\textrm{Linear}}=N$ if $N\le K+2\beta(v-1)$. In order to achieve $t^{*}_{\textrm{Linear}}$, we use random linear coding and show that in any feasible solution that the decoder finds, the messages of the honest nodes are decoded correctly. In order to prove the converse of the fundamental limit, we show that when the adversary behaves in a particular way, it can always confuse the decoder between two feasible solutions that differ in the message of at least one honest node. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
48. Decoding, reading and writing: the double helix theory of teaching.
- Author
-
Wyse, Dominic and Hacking, Charlotte
- Abstract
This paper presents a new theory and model of the teaching of decoding, reading and writing. The first part of the paper reviews a selection of influential models of learning to read and write that to varying degrees have been used as the basis for approaches to teaching, including the Simple View of Reading. As well as noting some strengths of the models in relation to children's learning, limitations are identified in terms of their applicability as models of teaching. The second part of the paper presents seven components that are central to teaching reading and writing, derived from social, cultural and cognitive research and theory. Explanations for the relevance of the components are offered, and the seminal and more recent research that underpins them is summarised. The final part of the paper introduces a new theory and model of teaching, The Double Helix of Reading and Writing. It is argued that this model provides a rationale for a balanced approach to teaching, and an alternative to synthetic phonics. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
49. Decoding kinematic information from beta-band motor rhythms of speech motor cortex: a methodological/analytic approach using concurrent speech movement tracking and magnetoencephalography.
- Author
-
Anastasopoulou, Ioanna, Cheyne, Douglas Owen, van Lieshout, Pascal, and Johnson, Blake Warren
- Subjects
MOTOR cortex ,SPEECH ,SENSORIMOTOR cortex ,MAGNETOENCEPHALOGRAPHY ,SCANNING systems ,ROBUST control ,TRANSCRANIAL magnetic stimulation ,NEUROBIOLOGY - Abstract
Introduction: Articulography and functional neuroimaging are two major tools for studying the neurobiology of speech production. Until now, however, it has generally not been feasible to use both in the same experimental setup because of technical incompatibilities between the two methodologies. Methods: Here we describe results from a novel articulography system dubbed Magneto-articulography for the Assessment of Speech Kinematics (MASK), which is technically compatible with magnetoencephalography (MEG) brain scanning systems. In the present paper we describe our methodological and analytic approach for extracting brain motor activities related to key kinematic and coordination event parameters derived from time-registered MASK tracking measurements. Data were collected from 10 healthy adults with tracking coils on the tongue, lips, and jaw. Analyses targeted the gestural landmarks of reiterated utterances /ipa/ and /api/, produced at normal and faster rates. Results: The results show that (1) speech sensorimotor cortex can be reliably located in peri-rolandic regions of the left hemisphere; (2) mu (8-12 Hz) and beta band (13-30 Hz) neuromotor oscillations are present in the speech signals and contain information structures that are independent of those present in higher-frequency bands; and (3) hypotheses concerning the information content of speech motor rhythms can be systematically evaluated with multivariate pattern analytic techniques. Discussion: These results show that MASK provides the capability for deriving subject-specific articulatory parameters, based on well-established and robust motor control parameters, in the same experimental setup as the MEG brain recordings and in temporal and spatial co-registration with the brain data. The analytic approach described here provides new capabilities for testing hypotheses concerning the types of kinematic information that are encoded and processed within specific components of the speech neuromotor system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
50. LabVIEW-Based Simulation Implementation of Balise Message Decoding.
- Author
-
Ye Ke, Hou Yan, and Wang Tong
- Published
- 2024
- Full Text
- View/download PDF