8,038 results
Search Results
2. Torn-Paper Coding.
- Author
Shomorony, Ilan and Vahid, Alireza
- Subjects
SEQUENTIAL analysis, DATA warehousing
- Abstract
We consider the problem of communicating over a channel that randomly “tears” the message block into small pieces of different sizes and shuffles them. For the binary torn-paper channel with block length $n$ and pieces of length $\mathrm{Geometric}(p_n)$, we characterize the capacity as $C = e^{-\alpha}$, where $\alpha = \lim_{n\to\infty} p_n \log n$. Our results show that the case of $\mathrm{Geometric}(p_n)$-length fragments and the case of deterministic length-$(1/p_n)$ fragments are qualitatively different and, surprisingly, the capacity of the former is larger. Intuitively, this is because, in the random-fragment case, large fragments are sometimes observed, which boosts the capacity. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
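To make the capacity expression in entry 2 concrete, here is a worked reading of the quoted formula (using only the relation stated in the abstract; the scaling choice for $p_n$ is an illustrative assumption):

```latex
% Quoted result: C = e^{-\alpha} with \alpha = \lim_{n\to\infty} p_n \log n.
% Illustrative scaling: suppose the tearing parameter decays as
% p_n = 1/\log n, so fragments have mean length \log n. Then
\[
  \alpha = \lim_{n\to\infty} \frac{\log n}{\log n} = 1,
  \qquad
  C = e^{-1} \approx 0.368 \ \text{bits per channel use},
\]
% versus 1 bit per channel use for the untorn binary channel.
```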
3. Paper Bodies: Data and Embodiment in the Sisterhood of Slade's Commonplace Books.
- Author
Hess, Jillian M.
- Subjects
COMMONPLACE-books, ROMANTICISM, ENCODING, PARATEXT, ARCHIVES
- Abstract
The article introduces a small bit of data for Romanticists' consideration, a collection of seventeen commonplace books kept from 1814 to 1817. It explores how Mary and Sarah Leigh and their cousin Maria Leigh used their commonplace books as archives of shared intimacy. The strategies the Leigh sisters used to encode embodied data include linking immaterial ideas with the materiality of the notebook and paratext that teaches how to read the verse in the context of the sisters' lived experience.
- Published
- 2022
- Full Text
- View/download PDF
4. A commentary on the NIMA paper by J. Brennan et al. on the demonstration of two-dimensional time encoded imaging of fast neutrons.
- Author
Wehe, David
- Subjects
FAST neutrons, ENCODING, ARMS control
- Published
- 2024
- Full Text
- View/download PDF
5. On the Capacity of the Carbon Copy onto Dirty Paper Channel.
- Author
Rini, Stefano and Shamai Shitz, Shlomo
- Subjects
RADIO transmitter fading, TRANSMITTERS (Communication), QUASISTATIC processes, RANDOM noise theory, ENCODING
- Abstract
The “carbon copy onto dirty paper” (CCDP) channel is the compound “writing on dirty paper” channel in which the channel output is obtained as the sum of the channel input, white Gaussian noise, and a Gaussian state sequence randomly selected from a set of possible realizations. The transmitter has non-causal knowledge of the set of possible state sequences but does not know which sequence is selected to produce the channel output. We study the capacity of the CCDP channel for two scenarios: 1) the state sequences are independent and identically distributed; and 2) the state sequences are scaled versions of the same sequence. In the first scenario, we show that a combination of superposition coding, time-sharing, and Gel’fand-Pinsker binning is sufficient to approach the capacity to within 3 bits per channel use for any number of possible state realizations. In the second scenario, we derive the capacity to within 4 bits per channel use for the case of two possible state sequences. This result is extended to the CCDP channel with any number of possible state sequences under certain conditions on the scaling parameters, which we denote as the “strong fading” regime. We conclude with some remarks on the capacity of the CCDP channel in which the state sequences have an arbitrary jointly Gaussian distribution. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
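For orientation on entry 5, the single-state baseline that the CCDP channel generalizes is Costa's classical "writing on dirty paper" result, which the compound setting approaches within the stated constant gaps. A standard statement of that background fact (not a claim from this paper):

```latex
% Costa (1983): with transmit power P, noise power N, and the Gaussian
% state S known non-causally at the encoder, the state costs nothing:
\[
  C = \tfrac{1}{2}\log_2\!\Bigl(1 + \frac{P}{N}\Bigr),
\]
% achieved with the Gel'fand--Pinsker auxiliary U = X + \alpha S and the
% MMSE scaling \alpha = P/(P+N).
```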
6. Proper multi-layer coding in fading dirty-paper channel.
- Author
Hoseini, Sayed Ali Khodam and Akhlaghi, Soroush
- Subjects
CHANNEL coding, RADIO transmitter fading, ADDITIVE white Gaussian noise channels, RADIO transmitters & transmission, ENCODING
- Abstract
This study investigates multi-layer coding over a dirty-paper channel. First, it is demonstrated that superposition coding in such a channel still achieves the capacity of the interference-free additive white Gaussian noise channel when the transmitter is non-causally aware of the interference signal. Then, the problem is extended to the dirty-paper block fading channel, where it is shown that, in the absence of channel information at the transmitter, the so-called broadcast approach maximises the average achievable rate of the channel. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
7. Ergodic Fading MIMO Dirty Paper and Broadcast Channels: Capacity Bounds and Lattice Strategies.
- Author
Hindy, Ahmed and Nosratinia, Aria
- Abstract
A multiple-input multiple-output (MIMO) version of the dirty paper channel is studied, where the channel input and the dirt experience the same fading process, and the fading channel state is known at the receiver. This represents settings where signal and interference sources are co-located, such as in the broadcast channel. First, a variant of Costa’s dirty paper coding is presented, whose achievable rates are within a constant gap to capacity for all signal and dirt powers. In addition, a lattice coding and decoding scheme is proposed, whose decision regions are independent of the channel realizations. Under Rayleigh fading, the gap to capacity of the lattice coding scheme vanishes with the number of receive antennas, even at finite Signal-to-Noise Ratio (SNR). Thus, although the capacity of the fading dirty paper channel remains unknown, this paper shows it is not far from its dirt-free counterpart. The insights from the dirty paper channel directly lead to transmission strategies for the two-user MIMO broadcast channel, where the transmitter emits a superposition of desired and undesired (dirt) signals with respect to each receiver. The performance of the lattice coding scheme is analyzed under different fading dynamics for the two users, showing that high-dimensional lattices achieve rates close to capacity. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
8. Dirty-Paper Coding Based Secure Transmission for Multiuser Downlink in Cellular Communication Systems.
- Author
Wang, Bo and Mu, Pengcheng
- Subjects
MULTIUSER channels, LINEAR network coding, WIRELESS communications, BROADCAST channels, COVARIANCE matrices, PROBABILITY theory
- Abstract
This paper studies the secure transmission in a multiuser broadcast channel where only the statistical channel state information of the eavesdropper is available. We propose to apply secret dirty-paper coding (S-DPC) in this scenario to support the secure transmission of one user and the normal (unclassified) transmission of the other users. By adopting the S-DPC and encoding the secret message in the first place, all the information-bearing signals of the normal transmission are treated as noise by potential eavesdroppers and thus provide secrecy for the secure transmission. In this way, the proposed approach exploits the intrinsic secrecy of multiuser broadcasting and can serve as an energy-efficient alternative to the traditional artificial noise (AN) scheme. To evaluate the secrecy performance of this approach and compare it with the AN scheme, we propose two S-DPC-based secure transmission schemes for maximizing the secrecy rate under constraints on the secrecy outage probability (SOP) and the normal transmission rates. The first scheme directly optimizes the covariance matrices of the transmit signals, and a novel approximation of the intractable SOP constraint is derived to facilitate the optimization. The second scheme combines zero-forcing dirty-paper coding and AN, and the optimization involves only power allocation. We establish efficient numerical algorithms to solve the optimization problems for both schemes. Theoretical and simulation results confirm that, in addition to supporting the normal transmission, the achievable secrecy rates of the proposed schemes can be close to that of the traditional AN scheme, which supports only the secure transmission of one user. [ABSTRACT FROM PUBLISHER]
- Published
- 2017
- Full Text
- View/download PDF
9. Text/Conference Paper
- Author
Cayoglu, Ugur, Tristram, Frank, Meyer, Jörg, Kerzenmacher, Tobias, Braesicke, Peter, and Streit, Achim
- Subjects
Data_CODINGANDINFORMATIONTHEORY, compression algorithms, meteorology, prediction-based compression, encoding, information spaces
- Abstract
The climate sciences are among the scientific communities generating the largest amounts of data today. New climate models enable model integrations at unprecedented resolution, simulating timescales from decades to centuries of climate change. Limited storage space and ever-increasing model output are a major challenge. For this reason, we look at lossless, prediction-based data compression. We show that the compression rate depends significantly on the chosen traversal method and the underlying data model. We examine the influence of this structural dependency on prediction-based compression algorithms and explore possibilities to improve compression rates. We introduce the concept of Information Spaces (IS), which improve the accuracy of predictions by nearly 10% and decrease the standard deviation of the compression results by 20% on average.
- Published
- 2019
- Full Text
- View/download PDF
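As a concrete illustration of the prediction-based compression described in entry 9, below is a minimal sketch of a last-value predictor with XOR residuals for float data (the general pattern such compressors use; the paper's predictors, traversal methods, and Information Spaces are more elaborate):

```python
import struct

def _bits(x: float) -> int:
    """Reinterpret a float64 as an unsigned 64-bit integer."""
    return struct.unpack("<Q", struct.pack("<d", x))[0]

def xor_residuals(values):
    """Last-value prediction with XOR residuals (exactly lossless).

    A good predictor makes consecutive bit patterns similar, so residuals
    carry long runs of leading zeros that entropy-code well. The traversal
    order decides which neighbour serves as the prediction, which is the
    structural dependence the abstract refers to.
    """
    prev, out = 0, []
    for v in values:
        b = _bits(v)
        out.append(b ^ prev)
        prev = b
    return out

def undo_xor_residuals(residuals):
    """Exact inverse of xor_residuals."""
    prev, values = 0, []
    for r in residuals:
        prev ^= r
        values.append(struct.unpack("<d", struct.pack("<Q", prev))[0])
    return values

assert undo_xor_residuals(xor_residuals([21.5, 21.6, 21.7])) == [21.5, 21.6, 21.7]
```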
10. Dirty Paper Coding Based on Polar Codes and Probabilistic Shaping.
- Author
Sener, M. Yusuf, Bohnke, Ronald, Xu, Wen, and Kramer, Gerhard
- Abstract
A precoding technique based on polar codes and probabilistic shaping is introduced for dirty paper coding. Two variants of the precoding use multi-level shaping and sign-bit shaping in one dimension. The decoder uses multi-stage successive-cancellation list decoding with list-passing across the bit levels. The approach achieves approximately the same frame error rates as polar codes with multi-level shaping over standard additive white Gaussian noise channels at a block length of 256 symbols and with different amplitude shift keying (ASK) constellations. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
11. A Review of Affective Computing Research Based on Function-Component-Representation Framework.
- Author
Ma, Haiwei and Yarosh, Svetlana
- Abstract
Affective computing (AC), a field that bridges the gap between human affect and computational technology, has witnessed remarkable technical advancement. However, theoretical underpinnings of affective computing are rarely discussed and reviewed. This paper provides a thorough conceptual analysis of the literature to understand theoretical questions essential to affective computing and current answers. Inspired by emotion theories, we proposed the function-component-representation (FCR) framework to organize different conceptions of affect along three dimensions that each address an important question: function of affect (why compute affect), component of affect (how to compute affect), and representation of affect (what affect to compute). We coded each paper by its underlying conception of affect and found preferences towards affect detection, behavioral component, and categorical representation. We also observed coupling of certain conceptions. For example, papers using the behavioral component tend to adopt the categorical representation, whereas papers using the physiological component tend to adopt the dimensional representation. The FCR framework is not only the first attempt to organize different theoretical perspectives in a systematic and quantitative way, but also a blueprint to help conceptualize an AC project and pinpoint new possibilities. Future work may explore how the identified frequencies of FCR framework combinations may be applied in practice. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
12. On the Dispersions of the Gel’fand–Pinsker Channel and Dirty Paper Coding.
- Author
Scarlett, Jonathan
- Subjects
ERROR probability, GAUSSIAN channels, RANDOM noise theory, DISPERSIVE channels (Telecommunication), CHANNEL coding
- Abstract
This paper studies the second-order coding rates for memoryless channels with a state sequence known non-causally at the encoder. In the case of finite alphabets, an achievability result is obtained using constant-composition random coding, and by using a small fraction of the block to transmit the empirical distribution of the state sequence. For error probabilities less than 0.5, it is shown that the second-order rate improves on an existing one based on independent and identically distributed random coding. In the Gaussian case (dirty paper coding) with an almost-sure power constraint, an achievability result is obtained using random coding over the surface of a sphere, and using a small fraction of the block to transmit a quantized description of the state power. It is shown that the second-order asymptotics are identical to the single-user Gaussian channel of the same input power without a state. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
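Entry 12's conclusion that the second-order asymptotics match the single-user Gaussian channel can be made concrete with the standard normal approximation from the channel-dispersion literature (a background statement, with $P$ the SNR and rates in bits):

```latex
% Maximal code size M*(n, eps) at blocklength n and error probability eps:
\[
  \log_2 M^*(n,\epsilon) = nC(P) - \sqrt{nV(P)}\,Q^{-1}(\epsilon) + O(\log n),
\]
% with Gaussian capacity and dispersion
\[
  C(P) = \tfrac{1}{2}\log_2(1+P), \qquad
  V(P) = \frac{P(P+2)}{2(P+1)^2}\,\log_2^2 e .
\]
% The result above says dirty paper coding attains the same C(P) and V(P)
% as if the interfering state were absent.
```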
13. Channel Capacity Analysis for Dirty Paper Coding With the Binary Codeword and Interference.
- Author
Xu, Zhengguang and Xie, Yongbiao
- Abstract
Dirty paper coding is an interference pre-cancellation method for interference known at the transmitter and serves as a basic building block in digital watermarking systems. In this letter, we investigate the dirty paper model in the simplest digital communication system, where both the codeword and the interference are binary. For watermark embedding, we derive the relevant coding, the constant coding, and the symmetric relevant coding when the encoder focuses on the binary codeword and interference. The channel capacity is analyzed and the optimal parameter is discussed for this case. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
14. Enhancing LS-PIE's Optimal Latent Dimensional Identification: Latent Expansion and Latent Condensation.
- Author
Stevens, Jesse, Wilke, Daniel N., and Setshedi, Isaac I.
- Subjects
SINGULAR value decomposition, COMPACT spaces (Topology), LATENT variables, PRINCIPAL components analysis, CONDENSATION
- Abstract
The Latent Space Perspicacity and Interpretation Enhancement (LS-PIE) framework enhances dimensionality reduction methods for linear latent variable models (LVMs). This paper extends LS-PIE by introducing an optimal latent discovery strategy to automate identifying optimal latent dimensions and projections based on user-defined metrics. The latent condensing (LCON) method clusters and condenses an extensive latent space into a compact form. A new approach, latent expansion (LEXP), incrementally increases latent dimensions using a linear LVM to find an optimal compact space. This study compares these methods across multiple datasets, including a simple toy problem, mixed signals, ECG data, and simulated vibrational data. LEXP can accelerate the discovery of optimal latent spaces and may yield different compact spaces from LCON, depending on the LVM. This paper highlights the LS-PIE algorithm's applications and compares LCON and LEXP in organising, ranking, and scoring latent components akin to principal component analysis or singular value decomposition. This paper shows clear improvements in the interpretability of the resulting latent representations allowing for clearer and more focused analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. The Distortions Region of Broadcasting Correlated Gaussians and Asymmetric Data Transmission Over a Gaussian BC.
- Author
Bross, Shraga I.
- Subjects
DATA transmission systems, DIGITAL communications, GAUSSIAN channels, BROADCAST channels, ELECTRONIC paper, VIDEO coding, DIGITAL video broadcasting
- Abstract
A memoryless bivariate Gaussian source is transmitted to a pair of receivers over an average-power limited, bandwidth-matched Gaussian broadcast channel. Based on their observations, Receiver 1 reconstructs the first source component while Receiver 2 reconstructs the second source component, both seeking to minimize the expected squared-error distortions. In addition to the source transmission, digital information at a specified rate should be conveyed reliably to Receiver 1, the “stronger” receiver. Given the message rate, we characterize the achievable distortions region. Specifically, there is an $\mathsf{SNR}$-threshold below which Dirty Paper coding of the digital information against a linear combination of the source components is optimal. The threshold is a function of the digital information rate, the source correlation, and the distortion at the “stronger” receiver. Above this threshold, a Dirty Paper coding extension of the Tian-Diggavi-Shamai hybrid scheme is shown to be optimal. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
16. Light drive reversible color switching for rewritable media and encoding.
- Author
Ren, Qiaoli, Aodeng, Gerile, Ga, Lu, and Ai, Jun
- Subjects
ELECTRONIC paper, REDUCING agents, CATALYSTS, ENCODING, INDUSTRIAL costs, COLOR
- Abstract
• Photoreversible color switching systems have been integrated using the reducing agent triethanolamine and a β-FeOOH nanorod catalyst.
• With a high switching rate and high reversibility (>10 cycles), the new system could see broad use in rewritable paper.
• The rewritable paper is highly applicable as a self-erasing rewritable medium for printing.
• The rewritable paper can be applied in a data encoding and reading strategy.
Nowadays, photoreversible color switching systems (PCSS) are constrained by several requirements, such as good stability, low toxicity, fast light response, long cycling performance, and low production cost; it is therefore a considerable challenge to develop a system that integrates all of these beneficial features. Herein, a new type of PCSS is demonstrated, integrating the reducing agent triethanolamine (TEOA), a catalyst of β-FeOOH nanorods, and the redox-driven color conversion characteristics of redox dyes. The system has the advantages of a high switching rate, high reversibility (>10 cycles), wavelength-selective response, safety, and low light damage, and it can be widely used in rewritable paper. The as-prepared rewritable paper has high contrast, high resolution, suitable printing time, and good reversibility, in line with the environmental protection concept of green printing. Rewritable paper is a self-erasable rewritable medium that is highly suitable for printing. The environmentally friendly film has the advantages of low cost, convenient preparation, and recyclability. It is expected to replace traditional writing and printing paper and existing systems, taking a big step toward practical application. Even more surprisingly, it can also be applied in a data encoding and reading strategy. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
17. Near-Field Chipless-RFID System With Erasable/Programmable 40-bit Tags Inkjet Printed on Paper Substrates.
- Author
Herrojo, Cristian, Mata-Contreras, Javier, Paredes, Ferran, Nunez, Alba, Ramon, Eloi, and Martin, Ferran
- Abstract
In this letter, a chipless radio frequency identification (chipless-RFID) system with erasable/programmable 40-bit tags inkjet printed on paper substrates, where tag reading proceeds sequentially through near-field coupling, is presented for the first time. The tags consist of a linear chain of identical split ring resonators (SRRs) printed at predefined and equidistant positions on a paper substrate, and each resonant element provides a bit of information. Tag programming is achieved by cutting certain resonant elements, providing the logic state “0” to the corresponding bit. Conversely, tags can be erased (all bits set to “1”) by short circuiting those previously cut resonant elements through inkjet. An important feature of the proposed system is the fact that tag reading is possible either with the SRR chain faced up or faced down (with regard to the reader). To this end, two pairs of header bits (resonators), with different sequences, have been added at the beginning and at the end of the tag identification chain. Moreover, tag data storage capacity (number of bits) is only limited by the space occupied by the linear chain. The implementation of tags on paper substrates demonstrates the potential of the proposed chipless-RFID system in secure paper applications, where the necessary proximity between the reader and the tag, inherent to near-field reading, is not an issue. [ABSTRACT FROM PUBLISHER]
- Published
- 2018
- Full Text
- View/download PDF
18. HME-KG: A method of constructing the human motion encoding knowledge graph based on a hierarchical motion model.
- Author
Liu, Qi, Huang, Tianyu, and Li, Xiangchen
- Subjects
KNOWLEDGE graphs, MOTION capture (Human mechanics), POSTURE, ENCODING, VISUALIZATION
- Abstract
The diversity, infinity, and nonuniform description of human motion make it challenging for computers to understand human activities. To explore and reuse captured human motion data, this work defines a more comprehensive hierarchical theoretical model of human motion and proposes a standard human posture encoding scheme. We construct a domain knowledge graph (DKG) named the human motion encoding knowledge graph (HME-KG) based on posture codes and action labels. Community detection, similarity analysis, and centrality analysis are used to explore the potential value of motion data. This paper conducts an evaluation and visualization of HME-KG. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Han–Kobayashi and Dirty-Paper Coding for Superchannel Optical Communications.
- Author
Koike-Akino, Toshiaki, Kojima, Keisuke, Millar, David S., Parsons, Kieran, Kametani, Soichiro, Sugihara, Takashi, Yoshida, Tsuyoshi, Ishida, Kazuyuki, Miyata, Yoshikuni, Matsumoto, Wataru, and Mizuochi, Takashi
- Abstract
Superchannel transmission is a candidate to realize Tb/s-class high-speed optical communications. In order to achieve higher spectrum efficiency, the channel spacing shall be as narrow as possible. However, densely allocated channels can cause non-negligible inter-channel interference (ICI) especially when the channel spacing is close to or below the Nyquist bandwidth. In this paper, we consider joint decoding to cancel the ICI in dense superchannel transmission. To further improve the spectrum efficiency, we propose the use of Han–Kobayashi superposition coding. In addition, for the case when neighboring subchannel transmitters can share data, we introduce dirty-paper coding for pre-cancelation of the ICI. We analytically evaluate the potential gains of these methods when ICI is present for sub-Nyquist channel spacing. [ABSTRACT FROM PUBLISHER]
- Published
- 2015
- Full Text
- View/download PDF
20. Long Short-Term Memory-Based Non-Uniform Coding Transmission Strategy for a 360-Degree Video.
- Author
Guo, Jia, Li, Chengrui, Zhu, Jinqi, Li, Xiang, Gao, Qian, Chen, Yunhe, and Feng, Weijia
- Subjects
PREDICTION models, TILES, VIDEOS, ALGORITHMS, VIDEO coding, ENCODING
- Abstract
This paper studies an LSTM-based adaptive transmission method for a 360-degree video and proposes a non-uniform encoding transmission strategy based on LSTM. Our goal is to maximize the user's video experience by dynamically dividing the 360-degree video into tiles of different numbers and sizes, and selecting different bitrates for each tile. This aims to reduce buffering events and video jitter. To determine the optimal number and size of tiles at the current moment, we constructed a dual-layer stacked LSTM network model. This model predicts, in real-time, the number, size, and bitrate of the tiles needed for the next moment of the 360-degree video based on the distance between the user's eyes and the screen. In our experiments, we used an exhaustive algorithm to calculate the optimal tile division and bitrate selection scheme for a 360-degree video under different network conditions, and used this dataset to train our prediction model. Finally, by comparing with other advanced algorithms, we demonstrated the superiority of our proposed method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
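A minimal sketch of the kind of dual-layer stacked LSTM predictor entry 20 describes (PyTorch; every size and the input/output layout here are hypothetical placeholders, not the authors' architecture):

```python
import torch
import torch.nn as nn

class TilePredictor(nn.Module):
    """Two stacked LSTM layers mapping a window of viewing features
    (e.g. eye-to-screen distance, recent throughput) to next-step tiling
    parameters such as tile count, tile size, and bitrate level."""

    def __init__(self, in_dim: int = 4, hidden: int = 64, out_dim: int = 3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(x)          # (batch, time, hidden)
        return self.head(out[:, -1])   # predict from the last time step

# usage: a batch of 8 sequences, 30 time steps, 4 features each -> (8, 3)
prediction = TilePredictor()(torch.randn(8, 30, 4))
```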
21. Fd-CasBGRel: A Joint Entity–Relationship Extraction Model for Aquatic Disease Domains.
- Author
Ye, Hongbao, Lv, Lijian, Zhou, Chengquan, and Sun, Dawei
- Subjects
KNOWLEDGE graphs, CORPORA, WEBSITES, GENERALIZATION, ENCODING
- Abstract
Featured Application: The model is primarily utilized for the task of entity relationship extraction during the construction of an aquatic disease knowledge graph. Entity–relationship extraction plays a pivotal role in the construction of domain knowledge graphs. For the aquatic disease domain, however, relationship extraction is a formidable task because of overlapping relationships, data specialization, limited feature fusion, and imbalanced data samples, which significantly weaken extraction performance. To tackle these challenges, this study leverages published books and aquatic disease websites as data sources to compile a text corpus, establish datasets, and propose the Fd-CasBGRel model specifically tailored to the aquatic disease domain. The model uses the Casrel cascading binary tagging framework to address relationship overlap; utilizes task fine-tuning for better performance on aquatic disease data; trains on specialized aquatic disease corpora to improve adaptability; and integrates the BRC feature fusion module, which incorporates self-attention mechanisms, BiLSTM, relative position encoding, and conditional layer normalization, to leverage entity position and context for enhanced fusion. Further, it replaces the traditional cross-entropy loss function with the GHM loss function to mitigate category imbalance. The experimental results indicate that the F1 score of Fd-CasBGRel on the aquatic disease dataset reached 84.71%, significantly outperforming several benchmark models. The model effectively addresses the low performance of relation-triple extraction caused by high data specialization, insufficient feature integration, and data imbalance. It achieved the highest F1 score of 86.52% on the overlapping-relationship category dataset, demonstrating robust capability in extracting overlapping data. Furthermore, we conducted comparative experiments on the publicly available WebNLG dataset, where the model obtained the best performance metrics among the compared models, indicating good generalization ability. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
22. Binding in Najdi Arabic: Types of Reflexives, the Argument Structure of Reflexive Constructions and Possessive Reflexives.
- Author
Alowayed, Asma I. and Albaty, Yasser A.
- Subjects
ARGUMENT, REFLEXIVITY, ENCODING, SYNTAX (Grammar)
- Abstract
The present paper investigates reflexives in Najdi Arabic (NA). We start by examining how the encoding of reflexivity in NA can be attained lexically, morphologically, and syntactically. We also investigate the argument structure of reflexive constructions in NA in accordance with Reinhart and Siloni’s (2005) bundling approach. Finally, possessive reflexives and their cross-linguistic distribution with definiteness marking are examined, providing empirical coverage to this area in NA. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
23. 48‐1: Invited Paper: Holographic Display Based on Complex‐Amplitude Encoding with Phase‐Only SLMs.
- Author
Sui, Xiaomeng, Cao, Liangcai, and Jin, Guofan
- Subjects
HOLOGRAPHIC displays, HOLOGRAPHY, LIGHT filters, IMAGE reconstruction, DIGITAL holographic microscopy, ENCODING
- Abstract
Double-phase holograms enable holographic reconstructions with improved image quality but still suffer from the spatial-shifting noise generated by complex-amplitude wavefront encoding. The band-limited double-phase method suppresses this noise through band limitation. A multi-plane complex-amplitude holographic display is implemented based on a band-limited double-phase hologram. High-sharpness reconstructions free of spatial-shifting noise are realized with numerical band limitation and optical filtering. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
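The double-phase decomposition behind entry 23 is a compact identity (standard background, not specific to this paper): any normalized complex amplitude splits into two pure-phase terms, which is what lets a phase-only SLM carry complex-amplitude information.

```latex
% For A in [0, 1], write the target complex amplitude as two unit phasors:
\[
  A e^{i\phi}
  = \tfrac{1}{2} e^{i(\phi+\delta)} + \tfrac{1}{2} e^{i(\phi-\delta)},
  \qquad \delta = \arccos A,
\]
% since the pair averages to e^{i\phi}\cos\delta = A e^{i\phi}. Spatially
% interleaving the two phase patterns on one SLM is what produces the
% shifting noise that the band-limited variant suppresses.
```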
24. A multi-scale residual encoding network for concrete crack segmentation.
- Author
Liu, Die, Xu, MengDie, Li, ZhiTing, He, Yingying, Zheng, Long, Xue, Pengpeng, and Wu, Xiaodong
- Subjects
CRACKING of concrete, LINEAR network coding, SURFACE cracks, ENCODING
- Abstract
Concrete surface crack detection plays a crucial role in ensuring concrete safety. However, manual crack detection is time-consuming, necessitating the development of an automatic method to streamline the process. Nonetheless, detecting concrete cracks automatically remains challenging due to the heterogeneous strength of cracks and the complex background. To address this issue, we propose a multi-scale residual encoding network for concrete crack segmentation. This network leverages the U-NET basic network structure to merge feature maps from different levels into low-level features, thus enhancing the utilization of predicted feature maps. The primary contribution of this research is the enhancement of the U-NET coding network through the incorporation of a residual structure. This modification improves the coding network's ability to extract features related to small cracks. Furthermore, an attention mechanism is utilized within the network to enhance the perceptual field information of the crack feature map. The integration of this mechanism enhances the accuracy of crack detection across various scales. Furthermore, we introduce a specially designed loss function tailored to crack datasets to tackle the problem of imbalanced positive and negative samples in concrete crack images caused by data imbalance. This loss function helps improve the prediction accuracy of crack pixels. To demonstrate the superiority and universality of our proposed method, we conducted a comparative evaluation against state-of-the-art edge detection and semantic segmentation methods using a standardized evaluation approach. Experimental results on the SDNET2018 dataset demonstrate the effectiveness of our method, achieving mIOU, F1-score, Precision, and Recall scores of 0.862, 0.941, 0.945, and 0.9394, respectively. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
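A minimal sketch of the residual-block idea entry 24 adds to the U-Net encoder (PyTorch; channel counts are hypothetical and this is the generic residual pattern, not the authors' exact block):

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """conv-BN-ReLU-conv-BN plus an identity skip: the skip path lets
    low-level crack details pass through the encoder unattenuated."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)  # add skip, then activate
```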
25. Tutorial - Collaborative approaches to discourse: Music scholarship using performance recordings and Linked Data annotations
- Author
Lewis, David, Page, Kevin, VanderHart, Chanda, Weigl, David M., Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, and methods, Digital Musicology, Annotation, Musicology, Media studies, Data Modelling, annotation structures, Music Performance, encoding, Computer science, Pre-Conference Workshop and Tutorial, Multimedia, Humanities computing, systems, and analysis, data modeling, linked (open) data, music and sound digitization
- Abstract
Participants are invited to explore modelling and annotation through exercises demonstrating music research conducted in Oxford and Vienna. After hands-on ontology design exercises with pen and paper, they are introduced to cutting-edge digital tooling and led through research processes of an ongoing project investigating the Vienna Philharmonic's New Year's Concerts.
- Published
- 2023
- Full Text
- View/download PDF
26. Coding With Noiseless Feedback Over the Z-Channel.
- Author
Deppe, Christian, Lebedev, Vladimir, Maringer, Georg, and Polyanskii, Nikita
- Subjects
ERROR-correcting codes, BOUND states, PARALLEL algorithms
- Abstract
In this paper, we consider encoding strategies for the Z-channel with noiseless feedback. We analyze the combinatorial setting where the maximum number of errors inflicted by an adversary is proportional to the number of transmissions, which goes to infinity. Without feedback, it is known that the rate of optimal asymmetric-error-correcting codes for the error fraction $\tau \ge 1/4$ vanishes as the blocklength grows. Here, we give an efficient feedback encoding scheme with $n$ transmissions that achieves a positive rate for any fraction of errors $\tau < 1$ as $n\to\infty$. Additionally, we state an upper bound on the rate of asymptotically long feedback asymmetric error-correcting codes. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
27. Rhythmic, Melodic and Vertical N-Gram Features as a Means of Studying Symbolic Music Computationally
- Author
McKay, Cory, Cumming, Julie, Fujinaga, Ichiro, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, attribution studies and stylometric analysis, Music theory, representation, Library & information science, Musicology, Statistics, Automated analysis, encoding, N-grams, Music classification, jSymbolic, Computer science, manuscripts description, Short Presentation, Machine learning, FOS: Mathematics, and analysis, artificial intelligence and machine learning, Features, music and sound digitization
- Abstract
This presentation explores how n-grams can be used to automatically classify and learn about music. An overall discussion is provided of various ways in which n-grams can be adapted for use with digital scores, and of how musically meaningful features can be extracted from them. The jSymbolic 3.0 alpha prototype feature extractor is then used in three sets of music classification experiments investigating how n-gram features perform relative to, and combined with, other types of features extracted from symbolic music files. Funded by the FRQSC and SSHRC.
- Published
- 2023
- Full Text
- View/download PDF
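For a concrete sense of the feature type entry 27 discusses, here is a minimal n-gram extractor over a melodic interval sequence (an illustrative sketch, not jSymbolic code):

```python
from collections import Counter

def ngram_counts(sequence, n):
    """Count overlapping n-grams in a symbolic music sequence,
    e.g. a list of melodic intervals in semitones."""
    grams = (tuple(sequence[i:i + n]) for i in range(len(sequence) - n + 1))
    return Counter(grams)

# intervals of an ascending-then-descending melodic figure
print(ngram_counts([2, 2, 1, -1, -2, -2, 2, 2, 1], n=3))
```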
28. DNA encoding schemes herald a new age in cybersecurity for safeguarding digital assets.
- Author
Aqeel, Sehrish, Khan, Sajid Ullah, Khan, Adnan Shahid, Alharbi, Meshal, Shah, Sajid, Affendi, Mohammed EL, and Ahmad, Naveed
- Subjects
ARTIFICIAL chromosomes, DNA, INTERNET security, ENCODING, ASSETS (Accounting)
- Abstract
With the urge to secure and protect digital assets, robust security measures are needed to keep pace with evolving cyber threats. Conventional advanced methods, such as encryption schemes, remain vulnerable to attacks. DNA encoding schemes offer synthetic DNA sequences as a promising alternative for encoding digital data, exploiting unique properties of DNA such as stability and durability. This study explores DNA's potential for encoding in evolving cyber security. Based on a systematic literature review, this paper discusses the challenges, advantages, and directions for future work. We analyzed current trends and innovations in methodology, security attacks, tool implementation, and the metrics used for measurement. Various tools, such as Mathematica, MATLAB, the NIST test suite, and CloudSim, were employed to evaluate the performance of the proposed methods and obtain results. By identifying the strengths and limitations of the proposed methods, the study highlights research challenges and offers future scope for investigation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
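A minimal sketch of the binary-to-nucleotide mapping that most DNA encoding schemes build on (the common textbook 2-bit mapping; the schemes surveyed in entry 28 layer encryption and error handling on top of this step):

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode_dna(data: bytes) -> str:
    """Map each byte to four nucleotides (2 bits per base)."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode_dna(strand: str) -> bytes:
    """Exact inverse of encode_dna."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert decode_dna(encode_dna(b"digital asset")) == b"digital asset"
```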
29. A clinical trial termination prediction model based on denoising autoencoder and deep survival regression.
- Author
Qi, Huamei, Yang, Wenhui, Zou, Wenqin, and Hu, Yuxuan
- Subjects
SIGNAL denoising, PREDICTION models, REGRESSION analysis, ENCODING, PREGNANT women
- Abstract
Effective clinical trials are necessary for understanding medical advances, but early termination of trials can result in unnecessary waste of resources. Survival models can be used to predict survival probabilities in such trials. However, survival data from clinical trials are sparse, and DeepSurv cannot accurately capture their effective features, making the models weak in generalization and decreasing their prediction accuracy. In this paper, we propose a survival prediction model for clinical trial completion based on the combination of a denoising autoencoder (DAE) and the DeepSurv model. The DAE is used to obtain a robust representation of features by breaking the loop of raw features after autoencoder training, and the robust features are then provided to DeepSurv as input for training. The clinical trial dataset for training the model was obtained from ClinicalTrials.gov. A study of clinical trial completion in pregnant women was conducted because many current clinical trials exclude pregnant women. The experimental results showed that the denoising autoencoder and deep survival regression (DAE-DSR) model was able to extract meaningful and robust features for survival analysis; the C-index values for the training and test datasets were 0.74 and 0.75, respectively. Compared with the Cox proportional hazards model and the DeepSurv model, the survival analysis curves obtained using the DAE-DSR model had more prominent features, and the model was more robust and performed better in actual prediction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Full-Process Adaptive Encoding and Decoding Framework for Remote Sensing Images Based on Compression Sensing.
- Author
Hu, Huiling, Liu, Chunyu, Liu, Shuai, Ying, Shipeng, Wang, Chen, and Ding, Yi
- Subjects
IMAGE compression, REMOTE sensing, COMPRESSED sensing, IMAGE reconstruction, ENCODING, FEATURE extraction, IMAGE segmentation
- Abstract
Faced with the incompatibility between traditional information acquisition modes and spaceborne Earth-observation tasks, and starting from the general mathematical model of compressed sensing, a theoretical model of block compressed sensing was established, and a full-process adaptive encoding and decoding compressed sensing framework for remote sensing images was proposed, comprising five parts: mode selection, feature factor extraction, adaptive shape segmentation, adaptive sampling rate allocation, and image reconstruction. Unlike previous semi-adaptive or locally adaptive methods, the advantages of the proposed adaptive encoding and decoding method are mainly reflected in four aspects: (1) the ability to select encoding modes based on image content, making maximal use of the richness of the image to select appropriate sampling methods; (2) the capability to utilize image texture details for adaptive segmentation, effectively separating complex and smooth regions; (3) the ability to detect the sparsity of encoding blocks and adaptively allocate sampling rates to fully exploit the compressibility of images; (4) adaptive selection of the reconstruction matrix based on the size of the encoding block to alleviate block artifacts caused by non-stationary characteristics of the image. Experimental results show that the proposed method is stable for remote sensing images with complex edge textures, with the peak signal-to-noise ratio and structural similarity remaining above 35 dB and 0.8. Moreover, for ocean images with relatively simple content, at a sampling rate of 0.26 the peak signal-to-noise ratio reaches 50.8 dB and the structural similarity is 0.99. In addition, the recovered images have the smallest BRISQUE value, with better clarity and less distortion. Subjectively, the reconstructed images have clear edge details and good reconstruction quality, while blocking effects are effectively suppressed. The framework designed in this paper is superior to similar algorithms in both subjective visual and objective evaluation indexes, which is of great significance for alleviating the incompatibility between traditional information acquisition methods and satellite-borne Earth-observation missions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
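The block compressed sensing model underlying entry 30's framework can be written in the generic form below (standard CS notation, not the paper's symbols):

```latex
% Each vectorized image block x_i (length B) is sampled by a measurement
% matrix Phi_i of size m_i x B with m_i << B:
\[
  y_i = \Phi_i\, x_i, \qquad r_i = \frac{m_i}{B},
\]
% where the per-block sampling rate r_i is what the adaptive allocation
% step assigns according to each block's sparsity.
```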
31. FS-GDI Based Area Efficient Hamming (11, 7) Encoding.
- Author
El-Bendary, Mohsen A. M. and El-Badry, O.
- Subjects
HAMMING codes, TELECOMMUNICATION systems, ENCODING, DELAY lines, VIDEO coding, DIGITAL signal processing, TRANSISTORS
- Abstract
This paper proposes an efficient design of a Hamming (11, 7) encoder utilising the Full Swing-Gate Diffusion Input (FS-GDI) approach at the 65 nm technology node. The proposed design aims to improve power and area efficiency by reducing the transistor count through a power-efficient logic style. Encoding circuits for the Hamming (11, 7) and (7, 4) codes are designed using both traditional and proposed approaches. Power consumption, delay time, Power Delay Product (PDP), and hardware simplicity are employed as metrics for evaluating the efficiency of the proposed encoding circuits. The simulation experiments are executed using the Cadence Virtuoso simulator package. These experiments revealed that the proposed Hamming encoding circuits reduce delay time by 50.91% and 20% for the Hamming (7, 4) and (11, 7) codes, respectively. Also, the hardware (H/W) simplicity and area efficiency of the circuits are improved by 50% compared to CMOS-based circuits. The analysis shows that the proposed FS-GDI-based Hamming encoding circuits achieve efficient power and delay optimisation. Hence, the power consumption, delay, and area overheads that the encoding process adds to communications systems and DSP circuits are reduced, and the overall performance of DSP circuits can be made more power- and area-efficient. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
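Independent of entry 31's circuit-level contribution, the logic being implemented is ordinary Hamming encoding. A minimal software sketch of an (11, 7) encoder with even parity (textbook construction, not the paper's FS-GDI design):

```python
def hamming_11_7_encode(data):
    """Encode 7 data bits into an 11-bit Hamming codeword.

    Parity bits occupy 1-indexed positions 1, 2, 4, 8; data bits fill
    positions 3, 5, 6, 7, 9, 10, 11. Parity bit p covers every position
    whose index has bit p set (even parity).
    """
    assert len(data) == 7 and all(bit in (0, 1) for bit in data)
    code = [0] * 12                                  # index 0 unused
    for pos, bit in zip((3, 5, 6, 7, 9, 10, 11), data):
        code[pos] = bit
    for p in (1, 2, 4, 8):
        parity = 0
        for i in range(1, 12):
            if i != p and (i & p):
                parity ^= code[i]
        code[p] = parity
    return code[1:]

print(hamming_11_7_encode([1, 0, 1, 1, 0, 0, 1]))    # 11-bit codeword
```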
32. Infrared and Visible Image Fusion Based on Res2Net-Transformer Automatic Encoding and Decoding.
- Author
Chunming Wu, Wukai Liu, and Xin Ma
- Subjects
IMAGE fusion, INFRARED imaging, FEATURE extraction, TRANSFORMER models, ENCODING
- Abstract
A novel image fusion network framework with an autonomous encoder and decoder is suggested to increase the visual impact of fused images by improving the quality of infrared and visible-light image fusion. The network comprises an encoder module, a fusion layer, a decoder module, and an edge improvement module. The encoder module utilizes an enhanced Inception module for shallow feature extraction, then combines Res2Net and Transformer to achieve deep-level co-extraction of local and global features from the original picture. An edge enhancement module (EEM) is created to extract significant edge features. A modal maximum difference fusion strategy is introduced to enhance the adaptive representation of information in various regions of the source image, thereby enhancing the contrast of the fused image. The encoder and the EEM module extract features, which are then combined in the fusion layer to create a fused picture using the decoder. Three datasets were chosen to test the algorithm proposed in this paper. The results of the experiments demonstrate that the network effectively preserves background and detail information in both infrared and visible images, yielding superior outcomes in subjective and objective evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Performance Analysis of New 2D Spatial OCDMA Encoding based on HG Modes in Multicore Fiber.
- Author
Sahraoui, Walid, Amphawan, Angela, Jasser, Muhammed Basheer, and Tse-Kian Neo
- Subjects
CODE division multiple access, CROSS correlation, ENCODING, VIDEO coding
- Abstract
This paper presents a pioneering 2D spatial Optical Code-Division Multiple Access (OCDMA) encoding system that exploits Mode Division Multiplexing (MDM) and Multicore Fiber (MCF) technologies. This innovative approach utilizes two spatial dimensions to enhance the performance and security of OCDMA systems. In the first dimension, we employ Hermite-Gaussian modes (HG00, HG01, HG11) to modulate each user's signal individually. This unique approach offers a robust means of data transmission while ensuring minimal interference among users. The second dimension leverages MCF encoding, introducing two incoherent OCDMA codes: the Zero Cross Correlation (ZCC) code ($\lambda_c = 0$) and the ZFD code ($\lambda_c = 1$). These codes are thoughtfully designed and simulated, taking into account their cross-correlation properties to guarantee minimal interference and heightened data security. To assess the efficiency of this novel OCDMA encoding system, we implemented simulations with three active users using the OptiSystem software. At the transmitter end, each user's signal is modulated individually by its designated HG mode (HG00, HG01, HG11), resulting in separate channels. Subsequently, at the multicore fiber, each user's data is encoded with a unique code-word, and the users are directed through specific core groups, ensuring data isolation and integrity. In this paper, the BER and eye pattern are examined with respect to parameters such as data rate and distance. At a distance of 5 km and a data rate of 10 Gbit/s, a BER of around $10^{-70}$ is achieved. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
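The cross-correlation property that separates the two codes in entry 33 can be written explicitly (standard incoherent-OCDMA notation, with $c_k$ the binary code-word of user $k$ and $L$ the code length):

```latex
\[
  \lambda_c \;=\; \max_{k \neq l} \sum_{i=1}^{L} c_k(i)\, c_l(i),
\]
% ZCC codes enforce lambda_c = 0 (no two users ever share a pulse
% position), while the ZFD code permits lambda_c = 1, i.e. at most one
% overlapping pulse position between any pair of users.
```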
34. A Multi-Modal Entity Alignment Method with Inter-Modal Enhancement.
- Author
Yuan, Song, Lu, Zexin, Li, Qiyuan, and Gu, Jinguang
- Subjects
ENCODING
- Abstract
Due to inter-modal effects hidden in multi-modalities and the impact of weak modalities on multi-modal entity alignment, a Multi-modal Entity Alignment Method with Inter-modal Enhancement (MEAIE) is proposed. This method introduces a unique modality called the numerical modality and applies a numerical feature encoder to encode it. In the feature embedding stage, this paper utilizes visual features to enhance entity relation representation and to influence entity attribute weight distribution. Then, this paper introduces attention layers and contrastive learning to strengthen inter-modal effects and mitigate the impact of weak modalities. To evaluate the performance of the proposed method, experiments are conducted on three public datasets: FB15K, DB15K, and YG15K. By combining the datasets in pairs and comparing with current state-of-the-art multi-modal entity alignment models, the proposed model achieves 2% and 3% improvements in Top-1 Hit Rate (Hit@1) and Mean Reciprocal Rank (MRR), respectively, demonstrating its feasibility and effectiveness. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
35. Hill Matrix and Radix-64 Bit Algorithm to Preserve Data Confidentiality.
- Author
Arshad, Ali, Nadeem, Muhammad, Riaz, Saman, Zahra, Syeda Wajiha, Dutta, Ashit Kumar, Alzaid, Zaid, Alabdan, Rana, Almutairi, Badr, and Almotairi, Sultan
- Subjects
DATA encryption, DATA security, DATA protection, ALGORITHMS, CONFIDENTIAL communications
- Abstract
There are many cloud data security techniques and algorithms available that can detect attacks on cloud data, but detection alone cannot protect the data from an attacker. Cloud cryptography is the best way to transmit data in a secure and reliable format. Various researchers have developed mechanisms to transfer data securely by converting it from readable to unreadable form, but these algorithms are not sufficient to provide complete data security, and each has its own weaknesses. If effective data protection techniques are used, an attacker will not be able to decipher the encrypted data, and even if the attacker tampers with the data, the attacker will not gain access to the original data. In this paper, several data security techniques are developed that together protect data from attackers. First, a customized American Standard Code for Information Interchange (ASCII) table is developed, with the value of each index defined in the customized table. When attackers try to decrypt data, they typically apply the predefined ASCII table to the ciphertext, which can otherwise help them decrypt it. After that, a radix-64 encryption mechanism is used, which doubles the amount of cipher data relative to the original data. When the number of cipher values is double the original data, an attacker who tries to decrypt each value obtains data that has no relation to the original. Finally, a Hill matrix algorithm is created, which generates a key that is valid only for the exact plaintext for which it was created and cannot be used on any other plaintext. The techniques used in this paper are compared with those in related work, and it is discussed how far the current algorithm improves on them. The Kasiski test is then used to verify the validity of the proposed algorithm, showing that if the proposed algorithm is used for data encryption, an attacker cannot break its security using existing techniques. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
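The Hill-matrix step in entry 35 follows the classic Hill cipher. A minimal sketch of that step over the ordinary 26-letter alphabet (textbook construction; the paper's customized ASCII table and radix-64 stages are not reproduced here):

```python
import numpy as np

KEY = np.array([[3, 3],
                [2, 5]])  # det = 9, coprime with 26, so invertible mod 26

def hill_encrypt(plaintext: str) -> str:
    """Encrypt letter pairs as c = K @ p (mod 26)."""
    nums = [ord(ch) - ord("A") for ch in plaintext.upper()]
    if len(nums) % 2:
        nums.append(ord("X") - ord("A"))             # pad the final block
    out = []
    for i in range(0, len(nums), 2):
        block = np.array(nums[i:i + 2])
        out.extend((KEY @ block) % 26)
    return "".join(chr(int(n) + ord("A")) for n in out)

print(hill_encrypt("HELP"))  # -> "HIAT"; decryption applies K^{-1} mod 26
```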
36. Nineteenth-century adaptations of concert music for domestic use as seen in contemporary periodicals: digital scholarship built on the foundations of IIIF, MEI and Linked Data
- Author
Lewis, David, Page, Kevin R, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, and methods, analysis, Musicology, annotation structures, scholarly editing and editions development, MEI, encoding, IIIF, domestic music, Linked Data, systems, Poster, and analysis, linked (open) data, music and sound digitization
- Abstract
We present a study of musical arrangements of concert music for domestic performance through the lens of an English monthly music journal (The Harmonicon). The study is supported by digital annotation tooling built on IIIF, MEI and Linked Data.
- Published
- 2023
- Full Text
- View/download PDF
37. Distortion: Authority, Authenticity, and Agency in Zora Neale Hurston's Black Folk Recordings
- Author
Clement, Tanya, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, Sound Studies, Long Presentation, Archives, Media studies, encoding, digital libraries creation, mixed-media analysis, digital research infrastructures development and analysis, American Studies, and analysis, management, music and sound digitization, African and African American Studies
- Abstract
In Digital Humanities "Distant Listening" scholarship with sound, the expectation is that the data set will be clean and audible and that its metadata will be descriptive and informative.[1] In contrast to this notion, this talk demonstrates the importance of distortions in approximately seventy-five brief "tracks" or recorded songs, stories, and explanations from a folklore recording trip Zora Neale Hurston took in 1935 to Florida and Georgia with Alan Lomax and Mary Elizabeth Barnicle for the Library of Congress. Close listening to her 1935 recordings reveals that social and technical distortions are in line with how Hurston expresses the complexities of authority, authenticity, and subjectivity in her writings, amplifying black epistemologies of self-making and creating resonant possibilities for imagining new transgressive formulations of cultural identity. This talk will consider how distortions play an important role in large-scale digital projects with sound in the humanities.
- Published
- 2023
- Full Text
- View/download PDF
38. Multimodal genre recognition of Chinese operas with hybrid fusion
- Author
Fan, Tao, Wang, Hao, Hodel, Tobias, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, Long Presentation, semantic analysis, genre recognition, Library & information science, Humanities computing, and analysis, encoding, Digital humanities, Chinese operas, music and sound digitization
- Abstract
In summary, we present our model MGRHF for the automatic genre recognition of Chinese operas and conduct empirical experiments. Experimental results demonstrate the efficiency and strong performance of MGRHF.
- Published
- 2023
- Full Text
- View/download PDF
39. Acoustical Cultural Heritages at the Centre of Cultural Exchanges. Origins and Distribution Patterns of Organ Building in South-East Europe
- Author
Ukolov, Dominik, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, Musicology, south-east europe, Geography and geo-humanities, and artefact preservation, musical instruments, cultural heritage, encoding, digitization (2D & 3D), Art history, Central/Eastern European Studies, cultural analytics, Short Presentation, data, distribution, and analysis, object, music and sound digitization
- Abstract
This study explores cultural exchanges of organ building between Central and South-Eastern Europe using multimodal analyses and interactive visualizations. Findings reveal specific trends of organ characteristics and diverse exchanges across the regions and timespans. Future research aims to further examine the findings through audiovisual digitization, acoustical analyses and virtualization approaches.
- Published
- 2023
- Full Text
- View/download PDF
40. Hand in Hand: Strauss' Kaiser Walzer as a case study of interdisciplinary collaboration in digital musicology
- Author
VanderHart, Chanda, Nurmikko-Fuller, Terhi, Weigl, David M., Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, music history, multimedia, and methods, analysis, Musicology, Media studies, open access methods, scholarly editing and editions development, linked data, encoding, Art history, digital musicology, digital scholarly communication, Short Presentation, Humanities computing, and analysis, linked (open) data, music and sound digitization
- Abstract
Composed to mark Franz Joseph's state visit to Wilhelm II, Strauss' Kaiser Walzer (Emperor Waltz) provokes the question: which Emperor was it for? We investigate this in a case study of interdisciplinary collaboration in digital musicology, employing a FAIR data multimedia corpus associated with the Vienna Philharmonic's New Year's Concerts.
- Published
- 2023
- Full Text
- View/download PDF
41. AudiAnnotate Workshop with Radio Venceremos, Rebel Radio Station and SpokenWeb: Using IIIF with AV to Build Editions and Exhibits
- Author
Clement, Tanya, Bursztajn-Illingworth, Zoe, Wintermeier, Trent, Burrows, Vera, Scholger, Walter, Vogeler, Georg, Tasovac, Toma, Baillot, Anne, Raunig, Elisabeth, Scholger, Martina, Steiner, Elisabeth, Centre for Information Modelling, and Helling, Patrick
- Subjects
Paper, Sound Studies, and methods, Archives, analysis, Chicano/a/x, Library & information science, Latino/a/x studies, Media studies, open access methods, scholarly editing and editions development, Cultural studies, encoding, Pre-Conference Workshop and Tutorial, metadata standards, systems, Latin American Studies, and analysis, music and sound digitization
- Abstract
This workshop will help participants make audio, video, and its interpretations more discoverable and usable through the use of the IIIF (International Image Interoperability Framework) standard for audio and video to produce, publish, and sustain W3C Web Annotations for individual and collaborative audiovisual editions, exhibits, and playlists. The workshop format will include introductions to IIIF, AudiAnnotate, Audacity, and GitHub using model, collaborative AudiAnnotate projects called the "SpokenWeb Anthology," made from recordings from the SpokenWeb Consortium and "Radio Venceremos, the Rebel's Radio Station," which features recordings from Radio Venceremos ("Radio We Will Overcome"), a popular, clandestine radio station that the Farabundo Martí National Liberation Front (FMLN) created to broadcast news and analysis from the mountains of Morazán, El Salvador during the eleven year Salvadoran Civil War (1981-1992).
- Published
- 2023
- Full Text
- View/download PDF
42. Pen and paper beats screen for retention.
- Subjects
HANDWRITING ,NEURAL development ,ENCODING - Abstract
The article reports that handwriting on paper leads to greater brain activity and better memory retention than writing on a tablet, enhancing the encoding of information in the hippocampus, precuneus, visual cortices, and other language-related frontal regions of the brain.
- Published
- 2021
43. Dictionary form in decoding, encoding and retention: Further insights.
- Author
-
DZIEMIANKO, ANNA
- Subjects
ELECTRONIC dictionaries ,FOREIGN language education ,PHONOLOGICAL encoding ,PHONOLOGICAL decoding ,ENCYCLOPEDIAS & dictionaries ,COLLOCATION (Linguistics) - Abstract
The aim of the paper is to investigate the role of dictionary form (paper versus electronic) in language reception, production and retention. The body of existing research does not give a clear answer as to which dictionary medium benefits users more. Divergent findings from many studies into the topic might stem from differences in research methodology (including the various tasks, participants and dictionaries used by different authors). Even a series of studies conducted by one researcher (Dziemianko, 2010, 2011, 2012b) leads to contradictory conclusions, possibly because of the use of paper and electronic versions of existing dictionaries, and the resulting problem with isolating dictionary form as a factor. To be able to argue with confidence that the results obtained follow from different dictionary formats, rather than presentation issues, research methodology should be improved. To successfully generalize about the significance of the medium for decoding, encoding and learning, the current study replicates previous research, but the presentation of lexicographic data on paper and on screen is now balanced, and the paper/electronic opposition is operationalized more appropriately. A real online dictionary and its paper-based counterpart composed of printouts of screen displays were used in the experiment in which the meaning of English nouns and phrases was explained, and collocations were completed with missing prepositions. A delayed post-test checked the retention of the meanings and collocations. The results indicate that dictionary medium does not play a statistically significant role in reception and production, but it considerably affects retention. [ABSTRACT FROM AUTHOR]
- Published
- 2017
- Full Text
- View/download PDF
44. Practical Dirty Paper Coding Schemes Using One Error Correction Code With Syndrome.
- Author
-
Kim, Taehyun, Kwon, Kyunghoon, and Heo, Jun
- Abstract
Dirty paper coding (DPC) offers an information-theoretic result for the pre-cancellation of interference known at the transmitter. In this letter, we propose practical DPC schemes that use only one error correction code. Our designs focus on practical use from the viewpoint of complexity. For a fair comparison with previous schemes, we measure the complexity of the proposed schemes by the number of operations used. Simulation results show that, compared to previous DPC schemes, the proposed schemes require lower transmission power to keep the bit error rate below $10^{-5}$. [ABSTRACT FROM PUBLISHER]
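The letter's actual constructions are not reproduced here; as a generic illustration of the underlying idea of pre-cancelling transmitter-known interference, the sketch below implements Tomlinson-Harashima-style modulo precoding, a classic scalar cousin of dirty paper coding. The alphabet, power levels, and noise variances are arbitrary choices for the demo, not parameters from the letter.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tomlinson-Harashima-style modulo precoding: the transmitter subtracts the
# known interference and folds the result back into a lattice cell, so the
# interference cancels exactly at the receiver after the same modulo fold.

M = 4                      # 4-PAM alphabet {-3, -1, 1, 3}
delta = 2 * M              # modulo lattice spacing

def mod_delta(x):
    """Symmetric modulo onto [-delta/2, delta/2)."""
    return x - delta * np.round(x / delta)

n = 10_000
symbols = rng.choice([-3, -1, 1, 3], size=n)   # data symbols
s = rng.normal(0, 2.0, size=n)                 # interference known at the TX
noise = rng.normal(0, 0.1, size=n)             # channel noise

x = mod_delta(symbols - s)     # precode: pre-subtract interference, fold
y = x + s + noise              # channel adds the interference back, plus noise
y_folded = mod_delta(y)        # receiver folds; interference vanishes

# Nearest-point detection on the 4-PAM grid.
detected = 2 * np.clip(np.round((y_folded + 3) / 2), 0, 3) - 3
print("symbol error rate:", np.mean(detected != symbols))
```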
- Published
- 2017
- Full Text
- View/download PDF
45. Generalized Compression Strategy for the Downlink Cloud Radio Access Network.
- Author
-
Patil, Pratik and Yu, Wei
- Subjects
RADIO access networks ,VIDEO coding ,IMAGE compression ,CELL phone systems - Abstract
This paper studies the downlink of a cloud radio access network (C-RAN) in which a centralized processor (CP) communicates with mobile users through base stations (BSs) that are connected to the CP via finite-capacity fronthaul links. Information theoretically, the downlink of a C-RAN is modeled as a two-hop broadcast-relay network. Among the various transmission and relaying strategies for such model, this paper focuses on the compression strategy, in which the CP centrally encodes the signals to be broadcast jointly by the BSs, then compresses and sends these signals to the BSs through the fronthaul links. We characterize an achievable rate region for a generalized compression strategy with Marton’s multicoding for broadcasting and multivariate compression for fronthaul transmission. We then compare this rate region with the distributed decode-forward (DDF) scheme, which achieves the capacity of the general relay networks to within a constant gap, and show that the difference lies in that DDF performs Marton’s multicoding and multivariate compression jointly as opposed to successively as in the compression strategy. A main result of this paper is that under the assumption that the fronthaul links are subject to a sum capacity constraint, this difference is immaterial; so, for the Gaussian network, the compression strategy based on successive encoding can already achieve the capacity region of the C-RAN to within a constant gap, where the gap is independent of the channel parameters and the power constraints at the BSs. As a further result, for C-RAN under individual fronthaul constraints, this paper also establishes that the compression strategy can achieve to within a constant gap to the sum capacity. [ABSTRACT FROM AUTHOR]
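As a back-of-the-envelope illustration of the compression strategy described above, and not the paper's Marton/multivariate scheme, the sketch below treats a single user served by one BS: the fronthaul capacity fixes the quantization-noise power via a standard Gaussian test channel, and that noise simply adds to the receiver noise. All parameter values are hypothetical.

```python
import numpy as np

# Toy point-to-point view of the compression strategy: the CP quantizes the
# BS's intended transmit signal and ships it over a fronthaul link of capacity
# C_f bits per complex symbol. For a Gaussian test channel xhat = x + e, the
# fronthaul rate log2(1 + P/q) = C_f gives quantization-noise power
# q = P / (2**C_f - 1). Single-BS sketch only; the paper's multi-BS analysis
# with Marton multicoding and multivariate compression is much richer.

def achievable_rate(P, N0, C_f):
    """End-to-end rate of one user behind one fronthaul-limited BS."""
    q = P / (2.0 ** C_f - 1.0)     # compression noise power
    sinr = P / (N0 + q)            # quantization noise acts as extra noise
    return np.log2(1.0 + sinr)     # bits per complex symbol

P, N0 = 10.0, 1.0                  # transmit power and receiver noise power
for C_f in [1, 2, 4, 8, 16]:
    r = achievable_rate(P, N0, C_f)
    cutset = min(C_f, np.log2(1 + P / N0))
    print(f"fronthaul {C_f:2d} b/sym -> user rate {r:.2f} b/sym "
          f"(cutset bound {cutset:.2f})")
```

As the fronthaul capacity grows, the quantization noise vanishes and the user rate approaches the interference-free bound, mirroring the constant-gap behavior the paper establishes.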
- Published
- 2019
- Full Text
- View/download PDF
46. The Relationship between Notetaking, Revision, and Learning in Tertiary Education: A Review of Literature, 1970 - 2023.
- Author
-
Carroll, Kathleen
- Subjects
LITERATURE reviews ,NOTETAKING ,EDUCATIONAL literature ,POSTSECONDARY education ,COGNITIVE ability ,READING comprehension - Abstract
The aim of this paper is to highlight the complexity of taking and reviewing notes at third level and its central importance to academic achievement. It is based on a review of the international literature on the notetaking process between 1970 and 2023. The paper describes notetaking and reviewing as the method of encoding and externally storing new material for the purpose of advancement in learning and attainment in assessment. It outlines research on the benefits of typed versus handwritten methods of notetaking. The overriding outcome demonstrates that taking notes, whether by longhand or typing, produces better results than not taking and reviewing notes at all. The remainder of the review focuses on the status of notetaking instruction in third-level colleges and universities. It is observed that, despite the centrality of notetaking to educational success and the positive impact of instruction on taking notes, skills training and modelling are generally not taught or embedded in the curricula of tertiary education. Furthermore, the paper describes teaching strategies, alongside linear and non-linear notetaking methods, that have been shown to encourage students to take and revise notes, which has, in turn, led to the enhancement of learning. The conclusion reviews the main points of the article and its limitations. A further review of the literature examining cognitive and metacognitive functions in notetaking would contribute to the understanding of how notetaking and revision operate to increase students' capacity for recall, comprehension, and knowledge. [ABSTRACT FROM AUTHOR]
- Published
- 2024
47. Data encoding for healthcare data democratization and information leakage prevention.
- Author
-
Thakur, Anshul, Zhu, Tingting, Abrol, Vinayak, Armstrong, Jacob, Wang, Yujiang, and Clifton, David A.
- Subjects
DEEP learning ,DEMOCRATIZATION ,ENCODING ,LEAKAGE ,MEDICAL care - Abstract
The lack of data democratization and information leakage from trained models hinder the development and acceptance of robust deep learning-based healthcare solutions. This paper argues that irreversible data encoding can provide an effective solution to achieve data democratization without violating the privacy constraints imposed on healthcare data and clinical models. An ideal encoding framework transforms the data into a new space where it is imperceptible to manual or computational inspection. However, encoded data should preserve the semantics of the original data such that deep learning models can be trained effectively. This paper hypothesizes the characteristics of the desired encoding framework and then exploits random projections and random quantum encoding to realize this framework for dense and longitudinal or time-series data. Experimental evaluation highlights that models trained on encoded time-series data effectively uphold the information bottleneck principle and hence exhibit less information leakage. Healthcare data democratization is often hampered by the privacy constraints governing sensitive healthcare data. Here, the authors show that encoding healthcare data could be a potential solution for achieving healthcare democratization within the context of deep learning. [ABSTRACT FROM AUTHOR]
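A minimal sketch of the random-projection half of such an encoding framework (the paper's exact pipeline, dimensions, and quantum encoding are not reproduced; everything below is an illustrative assumption): a wide Gaussian matrix renders individual features unreadable while approximately preserving pairwise distances, which is what downstream training relies on.

```python
import numpy as np

rng = np.random.default_rng(42)

# Irreversible encoding via random projection. A Gaussian random matrix maps
# records into a new space: individual features are no longer directly
# readable, yet pairwise geometry is roughly preserved (Johnson-Lindenstrauss),
# so models can still be trained on the encoded data.

n_patients, d_in, d_out = 500, 64, 48
X = rng.normal(size=(n_patients, d_in))   # stand-in for dense health records

R = rng.normal(0.0, 1.0 / np.sqrt(d_out), size=(d_in, d_out))
Z = X @ R                                 # encoded data; R is kept private or discarded

# Distances are approximately preserved across the encoding.
i, j = 0, 1
orig = np.linalg.norm(X[i] - X[j])
enc = np.linalg.norm(Z[i] - Z[j])
print(f"original distance {orig:.3f}, encoded distance {enc:.3f}")

# Without R, recovering X from Z is underdetermined (d_out < d_in), which is
# the intuition behind calling the encoding "irreversible" here.
```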
- Published
- 2024
- Full Text
- View/download PDF
48. Self-Bilinear Map from One Way Encoding System and Indistinguishability Obfuscation.
- Author
-
Zhang, Huang, Huang, Ting, Zhang, Fangguo, Wei, Baodian, and Du, Yusong
- Subjects
CYCLIC groups ,CONCRETE construction ,KEY agreement protocols (Computer network protocols) ,ENCODING - Abstract
A bilinear map whose domain and target sets are identical is called a self-bilinear map. The original self-bilinear maps are defined over cyclic groups. Since the map itself reveals information about the underlying cyclic group, the decisional Diffie–Hellman (DDH) and computational Diffie–Hellman (CDH) problems may be easy to solve in some specific groups. This imposes significant limitations on constructing secure self-bilinear schemes. As a compromise, a self-bilinear map with auxiliary information was proposed at CRYPTO 2014. In this paper, we construct this weak variant of a self-bilinear map from generic sets and indistinguishability obfuscation. These sets must satisfy several properties, which we summarize in a new notion, the One Way Encoding System (OWES). A new problem, the Encoding Division Problem (EDP), is defined to complete the security proof. An OWES can be built using one level of a graded encoding system (GES). To construct a concrete self-bilinear map scheme, the Garg–Gentry–Halevi (GGH13) GES is adopted in our work. Even though the security of GGH13 was recently broken by Hu et al., their algorithm does not threaten our applications. At the end of this paper, some further considerations on the EDP for the concrete construction are given to strengthen the confidence that the EDP is indeed hard. [ABSTRACT FROM AUTHOR]
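To see why a self-bilinear map over a cyclic group is so constraining, here is a deliberately insecure toy, an illustration rather than the paper's OWES/iO-based construction: over the additive group Z_n, e(a, b) = ab mod n is a self-bilinear map, and it immediately yields a DDH distinguisher.

```python
import random

# Toy (insecure) self-bilinear map over the additive cyclic group Z_n:
# e(a, b) = a*b mod n has domain and target both Z_n, and is bilinear.
# It shows why such maps break DDH in the group.

n = 1009                       # prime group order, for a clean toy
g = 1                          # generator of (Z_n, +)

def e(a, b):
    """Self-bilinear map Z_n x Z_n -> Z_n."""
    return (a * b) % n

random.seed(7)
a = random.randrange(1, n)
b = random.randrange(1, n)
real = (a * b * g) % n         # the "g^{ab}" element, in additive notation
fake = random.randrange(1, n)  # a random group element

def looks_like_ddh_tuple(x, y, z):
    # e(a*g, b*g) = a*b*e(g, g) and e(g, z) = z*e(g, g), so equality holds
    # exactly when z encodes the product a*b.
    return e(x, y) == e(g, z)

print(looks_like_ddh_tuple(a * g % n, b * g % n, real))  # True
print(looks_like_ddh_tuple(a * g % n, b * g % n, fake))  # False w.h.p.
```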
- Published
- 2024
- Full Text
- View/download PDF
49. Asymmetric solid burst correcting integer codes.
- Author
-
Das, Pankaj Kumar and Pokhrel, Nabin Kumar
- Subjects
PROBABILITY theory ,MEMORY ,NOISE ,ENCODING ,MOTIVATION (Psychology) - Abstract
With the development of technology, communication channels increasingly experience burst faults of various forms caused by noise. To cope with this, an appropriate encoding and decoding mechanism should be designed that takes into account redundancy, memory usage, efficiency, and related factors. Motivated by these facts, in this paper we present a class of integer codes capable of correcting asymmetric solid burst errors. In addition to the theoretical foundations, the paper derives expressions for the probabilities of correct and incorrect decoding for the proposed codes. Lastly, we compare the proposed codes with other similar codes in terms of code rate and memory consumption. [ABSTRACT FROM AUTHOR]
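To make the syndrome mechanism concrete, the toy below is a much-simplified sketch, not the codes constructed in the paper: a single modular check symbol with power-of-two coefficients gives every correctable asymmetric solid burst, restricted here to bursts that decrease each affected symbol by exactly 1, a unique syndrome, so correction reduces to a table lookup.

```python
# Toy syndrome decoder illustrating the general mechanism behind integer
# codes. Window sums of powers of two are runs of ones in binary, hence all
# distinct mod M, so each burst (start, length) has a unique syndrome.
# The paper's codes handle far more general asymmetric solid bursts.

M = 257                             # check modulus (prime > 2**8)
N = 8                               # codeword length
COEF = [2 ** j for j in range(N)]   # column coefficients 1, 2, 4, ..., 128

def encode(data):                   # data: 7 symbols in Z_257
    partial = sum(c * d for c, d in zip(COEF, data)) % M
    check = (-partial * pow(COEF[-1], -1, M)) % M   # solve for the check symbol
    return data + [check]

def syndrome(word):
    return sum(c * w for c, w in zip(COEF, word)) % M

# Precompute syndrome -> (burst start, burst length) for all solid bursts.
TABLE = {(-sum(COEF[i:i + l])) % M: (i, l)
         for i in range(N) for l in range(1, N - i + 1)}

def correct(word):
    s = syndrome(word)
    if s == 0:
        return word
    i, l = TABLE[s]                 # identify the burst from the syndrome
    return [(w + 1) % M if i <= j < i + l else w
            for j, w in enumerate(word)]

cw = encode([10, 200, 33, 7, 99, 150, 42])
rx = [w - 1 if 2 <= j < 6 else w for j, w in enumerate(cw)]  # burst at 2..5
print(correct(rx) == cw)            # True: the burst is corrected
```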
- Published
- 2024
- Full Text
- View/download PDF
50. Construction and Application of a Data-Driven Abstract Extraction Model for English Text.
- Author
-
Peng, Hui
- Subjects
ALGORITHMS ,MEMORY ,PROTOTYPES ,PERCENTILES ,ENCODING - Abstract
In this paper, a single English text is taken as the research object, and automatic summary extraction is studied using a data-driven method. The paper establishes connection relationships between the sentences of an article and proposes an automatic summary-extraction method based on a graph model and a topic model. The method combines a text graph model, complex network theory, and the LDA topic model to construct a sentence-scoring function that computes single-sentence weights and outputs the sentences above a threshold, in descending order of weight, as the summary. The algorithm improves the readability of the summary while conveying enough of the text's information. We further propose a BERT-based topic-aware summarization model built on a neural topic model. The approach matches the latent topic embeddings encoded by the neural topic model with BERT's embedding representations to guide topic generation toward the semantic representation of the text, and jointly explores topic inference and summary generation in an end-to-end manner through a transformer architecture, capturing semantic features while modelling long-range dependencies with a self-attention mechanism. We also propose improvements to both extractive and generative algorithms based on pretrained models, enhancing their memory of global information. Combining the advantages of both algorithms, a new joint model is proposed that generates summaries more consistent with the original topic, with a reduced repetition rate when article information is evenly distributed. Comparative experiments were conducted on several datasets, and small, uniformly distributed private datasets were constructed. Across these experiments, the evaluation metrics improved by up to 2.5 percentage points, demonstrating the effectiveness of the method, and a prototype system for automatic abstract generation was built to demonstrate the results. [ABSTRACT FROM AUTHOR]
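A minimal TextRank-style sketch of the graph-model component described above (the LDA weighting, BERT topic matching, and joint model are omitted; the similarity function and constants are illustrative assumptions): sentences are nodes, lexical overlap gives edge weights, and a PageRank-style power iteration scores sentences for extraction.

```python
import re
import numpy as np

# Graph-based extractive summarization sketch: build a sentence-similarity
# graph, run a PageRank-style iteration, and emit the top-ranked sentences
# in document order as the summary.

def split_sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def similarity(a, b):
    wa, wb = set(a.lower().split()), set(b.lower().split())
    if not wa or not wb:
        return 0.0
    return len(wa & wb) / (np.log(len(wa) + 1) + np.log(len(wb) + 1))

def textrank(sentences, d=0.85, iters=50):
    n = len(sentences)
    W = np.array([[similarity(si, sj) if i != j else 0.0
                   for j, sj in enumerate(sentences)]
                  for i, si in enumerate(sentences)])
    col = W.sum(axis=0)
    col[col == 0] = 1.0                    # avoid division by zero
    P = W / col                            # column-normalize edge weights
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - d) / n + d * P @ r        # power iteration
    return r

text = ("Graph models rank sentences by centrality. Topic models capture "
        "what the document is about. Combining both scores sentences by "
        "centrality and topical relevance. The top-ranked sentences form "
        "the extractive summary.")
sents = split_sentences(text)
scores = textrank(sents)
top = sorted(np.argsort(scores)[::-1][:2])  # top-2, kept in document order
print(" ".join(sents[i] for i in top))
```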
- Published
- 2022
- Full Text
- View/download PDF