7,820 results for "Kullback–Leibler divergence"
Search Results
2. A Measure of Departure from Marginal Homogeneity using Continuation Odds for Square Contingency Tables with Ordered Categories.
- Author
Ando, Shuji, Fujimoto, Kei, and Tomizawa, Sadao
- Abstract
For the analysis of square contingency tables with ordinal classifications, the marginal homogeneity (MH) model is one of the important models. Several measures for analyzing the degree of departure from the MH model have been proposed. This study proposes a new measure using continuation odds, which may be regarded as a discrete-time hazard. The proposed measure is expressed in the form of Cressie-Read's power divergence (including the Kullback-Leibler divergence) or Patil and Taillie's diversity index (including Shannon entropy). This study derives a plug-in estimator of the proposed measure and an approximate confidence interval for it, and evaluates their performance through numerical examples. Additionally, the usefulness of the proposed measure is demonstrated by applying it to real data in which the row and column variables are discrete survival time variables. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
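The Cressie-Read power divergence mentioned in result 2 has a standard closed form that recovers the Kullback-Leibler divergence in the limit. The sketch below shows only that generic family with illustrative distributions, not the authors' continuation-odds measure:

```python
import numpy as np

def power_divergence(p, q, lam):
    """Cressie-Read power divergence I_lambda(p : q) between two discrete
    distributions (strictly positive entries assumed); lam = 0 is the
    Kullback-Leibler limit."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    if lam == 0:
        return float(np.sum(p * np.log(p / q)))      # KL divergence
    return float(np.sum(p * ((p / q) ** lam - 1.0)) / (lam * (lam + 1.0)))

p = [0.2, 0.5, 0.3]
q = [0.3, 0.4, 0.3]
print(power_divergence(p, q, 0.0))     # KL divergence
print(power_divergence(p, q, 1e-6))    # approaches the same value as lam -> 0
```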
3. Evaluating model fit for type II censored data: a Bayesian non-parametric approach based on the Kullback-Leibler divergence estimation.
- Author
Al-Labadi, Luai, Fazeli-Asl, Forough, and Ly, Anna
- Subjects
CENSORING (Statistics), STATISTICAL models, STATISTICS, SIMULATION methods & models, ALGORITHMS
- Abstract
Model checking evaluates the appropriateness of a statistical model based on the observed data and is essential for valid statistical analyses. In this paper, a new procedure for model checking with type II censored data is proposed. This procedure combines the Kullback-Leibler divergence, the Dirichlet process, and the relative belief ratio. The method is implemented via a computational algorithm and is explained through several examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
4. Local inconsistency detection using the Kullback–Leibler divergence measure.
- Author
Spineli, Loukia M.
- Subjects
LEGAL evidence, STANDARD deviations, DENSITY
- Abstract
Background: The standard approach to local inconsistency assessment typically relies on testing the conflict between the direct and indirect evidence in selected treatment comparisons. However, statistical tests for inconsistency have low power and are subject to misinterpreting a p-value above the significance threshold as evidence of consistency. Methods: We propose a simple framework to interpret local inconsistency based on the average Kullback–Leibler divergence (KLD) from approximating the direct with the corresponding indirect estimate and vice versa. Our framework directly uses the mean and standard error (or posterior mean and standard deviation) of the direct and indirect estimates obtained from a local inconsistency method to calculate the average KLD measure for selected comparisons. The average KLD values are compared with a semi-objective threshold to judge the inconsistency as acceptably low or material. We exemplify our novel interpretation approach using three networks with multiple treatments and multi-arm studies. Results: Almost all selected comparisons in the networks were not associated with statistically significant inconsistency at a significance level of 5%. The proposed interpretation framework indicated 14%, 66%, and 75% of the selected comparisons with an acceptably low inconsistency in the corresponding networks. Overall, information loss was more notable when approximating the posterior density of the indirect estimates with that of the direct estimates, attributed to indirect estimates being more imprecise. Conclusions: Using the concept of information loss between two distributions alongside a semi-objectively defined threshold helped distinguish target comparisons with acceptably low inconsistency from those with material inconsistency when statistical tests for inconsistency were inconclusive. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
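Result 4 computes an average KLD directly from the means and standard errors of the direct and indirect estimates, which amounts to a symmetrized KL divergence between two normal approximations. A minimal sketch, with hypothetical estimates and assuming normality as the abstract's use of means and standard errors suggests:

```python
import numpy as np

def kld_normal(mu0, sd0, mu1, sd1):
    """KL(N(mu0, sd0^2) || N(mu1, sd1^2)) in closed form."""
    return np.log(sd1 / sd0) + (sd0**2 + (mu0 - mu1)**2) / (2 * sd1**2) - 0.5

def average_kld(direct_mean, direct_se, indirect_mean, indirect_se):
    """Average of the two KLD directions between the normal approximations
    of the direct and indirect estimates."""
    return 0.5 * (kld_normal(direct_mean, direct_se, indirect_mean, indirect_se)
                  + kld_normal(indirect_mean, indirect_se, direct_mean, direct_se))

# Hypothetical direct/indirect estimates (e.g., log odds ratios) for one comparison:
print(average_kld(0.25, 0.10, 0.55, 0.30))   # compare with a chosen threshold
```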
5. On Bayesian Hotelling's T² test for the mean.
- Author
Al-Labadi, Luai, Fazeli Asl, Forough, and Lim, Kyuson
- Subjects
GAUSSIAN distribution, STATISTICAL sampling, A priori, HYPOTHESIS
- Abstract
The multivariate one-sample problem considers an independent random sample from a multivariate normal distribution with mean μ and unknown covariance matrix Σ. For a given real vector μ₁, the interest is in assessing the hypothesis H₀: μ = μ₁. This paper proposes a new Bayesian approach to this problem based on comparing the change in the Kullback-Leibler divergence from a priori to a posteriori via the relative belief ratio. Eliciting the prior is also considered. The use of the approach is illustrated through several examples. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
6. Spatial randomness-based anomaly detection approach for monitoring local variations in multimode surface topography.
- Author
Baek, Jaeseung, Jeong, Myong K., and Elsayed, Elsayed A.
- Subjects
SURFACE topography, ORDER statistics, SURFACE properties, SURFACE analysis, MANUFACTURING processes
- Abstract
Anomaly detection of three-dimensional (3D) topographic data is a challenging problem in spatial data analysis. In this paper, we investigate spatial patterns of 3D surface data that exhibit multiple in-control modes. In complex manufacturing processes, the surfaces of final products can contain different topographic features from one in-control surface to another, making it difficult to monitor the surface with existing approaches, which rely on the assumption of a single-mode surface topography. We propose a novel anomaly detection approach for monitoring local topographic variations in the presence of multimode surface topography. We present a binarization model to capture the generic behavior of the multimode surfaces and enhance the representation of the surface. To systematically monitor the surface, we introduce a new probabilistic distance measure (PDM) that quantifies the similarity of spatial patterns between two binarized surfaces. The proposed PDM identifies local variations by utilizing order-neighbor statistics, which capture local properties of the surface. Experimental results with numerical simulation data and real-life paper surface data are provided to demonstrate the effectiveness of the proposed approach. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
7. CLLT 'versus' Corpora and IJCL: a (half serious) keyness analysis.
- Author
Wulff, Stefanie and Gries, Stefan Th.
- Subjects
CORPORA, RESEARCH personnel, ANNIVERSARIES, TEAMS
- Abstract
In this introduction to the special issue celebrating CLLT's 20th anniversary, we look both back and forward in time. To look back, we present the results of a (tongue-in-cheek) corpus-linguistic analysis of about 10 years' worth of research published in CLLT, IJCL, and Corpora in order to distill the "essence" of CLLT for the reader. As an added bonus, we use the opportunity to discuss ways to improve established methods of performing keyness analyses. To look forward, we asked six (teams of) researchers who have all shaped corpus linguistics, and thus the journal, to give us their take on what the most significant developments in the field have been, and where they see the most impactful opportunities and challenges arising. This introduction briefly summarizes their contributions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
8. Feature Vector Effectiveness Evaluation for Pattern Selection in Computational Lithography.
- Author
Feng, Yaobin, Liu, Jiamin, Jiang, Hao, and Liu, Shiyuan
- Subjects
FAST Fourier transforms, LITHOGRAPHY, KEY performance indicators (Management), CALIBRATION
- Abstract
Pattern selection is crucial for optimizing the calibration process of optical proximity correction (OPC) models in computational lithography. However, it remains a challenge to balance representative coverage against computational efficiency. This work presents a comprehensive evaluation of the effectiveness of feature vectors (FVs) in pattern selection for OPC model calibration, leveraging key performance indicators (KPIs) based on Kullback–Leibler divergence and distance ranking. Through the construction of autoencoder-based FVs and fast Fourier transform (FFT)-based FVs, we compare their efficacy in capturing critical pattern features. Validation experiments indicate that autoencoder-based FVs, particularly when augmented with lithography domain knowledge, outperform their FFT-based counterparts in identifying anomalies and enhancing lithography model performance. These results also underscore the importance of adaptive pattern representation methods in calibrating OPC models of evolving complexity. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
9. Label distribution similarity-based noise correction for crowdsourcing.
- Author
Ren, Lijuan, Jiang, Liangxiao, Zhang, Wenjun, and Li, Chaoqun
- Abstract
In crowdsourcing scenarios, we can obtain each instance's multiple noisy labels from different crowd workers and then infer its integrated label via label aggregation. Despite the effectiveness of label aggregation methods, a certain level of noise remains in the integrated labels. Thus, some noise correction methods have been proposed in recent years to reduce the impact of this noise. However, to the best of our knowledge, existing methods rarely consider an instance's information from both its features and its multiple noisy labels simultaneously when identifying a noise instance. In this study, we argue that the more distinguishable an instance's features and the noisier its multiple labels, the more likely it is to be a noise instance. Based on this premise, we propose a label distribution similarity-based noise correction (LDSNC) method. To measure whether an instance's features are distinguishable, we obtain each instance's predicted label distribution by building multiple classifiers using instances' features and their integrated labels. To measure whether an instance's multiple noisy labels are noisy, we obtain each instance's multiple noisy label distribution using its multiple noisy labels. Then, we use the Kullback-Leibler (KL) divergence to calculate the similarity between the predicted label distribution and the multiple noisy label distribution and define the instance with the lower similarity as a noise instance. Extensive experimental results on 34 simulated and four real-world crowdsourced datasets validate the effectiveness of our method. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
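The KL-based similarity step of result 9 can be sketched as follows; the construction of the predicted label distribution (classifier ensemble) and of the noisy label distribution (worker votes) is simplified here, and the example values are hypothetical:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete KL divergence with smoothing to avoid log(0)."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

# Hypothetical 3-class example for one crowdsourced instance:
predicted = [0.80, 0.15, 0.05]        # from classifiers built on the features
noisy_votes = [2, 3, 0]               # counts of the workers' noisy labels
noisy_dist = np.array(noisy_votes) / sum(noisy_votes)

# Lower similarity (higher KL) flags the instance as a noise candidate.
print(kl_divergence(predicted, noisy_dist))
```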
10. Taming numerical imprecision by adapting the KL divergence to negative probabilities.
- Author
Pfahler, Simon, Georg, Peter, Schill, Rudolf, Klever, Maren, Grasedyck, Lars, Spang, Rainer, and Wettig, Tilo
- Abstract
The Kullback–Leibler (KL) divergence is frequently used in data science. For discrete distributions on large state spaces, approximations of probability vectors may result in a few small negative entries, rendering the KL divergence undefined. We address this problem by introducing a parameterized family of substitute divergence measures, the shifted KL (sKL) divergence measures. Our approach is generic and does not increase the computational overhead. We show that the sKL divergence shares important theoretical properties with the KL divergence and discuss how its shift parameters should be chosen. If Gaussian noise is added to a probability vector, we prove that the average sKL divergence converges to the KL divergence for small enough noise. We also show that our method solves the problem of negative entries in an application from computational oncology, the optimization of Mutual Hazard Networks for cancer progression using tensor-train approximations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
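The abstract of result 10 does not give the exact parameterization of the sKL divergence, so the sketch below assumes the simplest plausible form: shifting both arguments by a small non-negative parameter before evaluating the usual KL sum, so that tiny negative entries no longer break the logarithm:

```python
import numpy as np

def skl_divergence(p, q, eps):
    """Shifted-KL sketch: add a shift eps to both arguments so that tiny
    negative entries (numerical approximation error) no longer break the
    log; an assumed form, not necessarily the paper's parameterization."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

# A probability vector with a tiny negative entry from an approximation:
p = np.array([0.4, 0.6, -1e-9])
q = np.array([0.5, 0.5, 1e-9])
print(skl_divergence(p, q, eps=1e-6))   # finite, unlike the plain KL
```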
11. Fisher-Based Inaccuracy Information Measure.
- Author
Kharazmi, Omid, Contreras-Reyes, Javier E., and Balakrishnan, Narayanaswamy
- Subjects
FISHER information, INFORMATION measurement, TIME series analysis, EQUILIBRIUM, DENSITY
- Abstract
We introduce a new inaccuracy measure in terms of Fisher information, referred to as the Fisher-based inaccuracy information (FBII) measure. Next, we examine some properties of this information measure, specifically for escort and equilibrium distributions. Further, we propose the Bayes–Fisher-based inaccuracy information (BFBII) measure and examine its connection to the Kullback–Leibler and chi-square divergence measures. Moreover, in three different optimization problems, we show that the harmonic-mixture distribution gives optimal information based on the BFBII measure. Some examples of the FBII measure and of escort densities related to skew-normal and Student-t distributions are also illustrated and then applied to fish condition factor time series. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Attribute Value Weighted Averaged One-Dependence Estimators with Kullback–Leibler Divergence.
- Author
Zhu, Changjian, Chen, Shenglei, Ke, Huihang, and Zhang, Chengzhen
- Subjects
DATABASES, ALGORITHMS, CLASSIFICATION, PROBABILITY theory
- Abstract
The Averaged One-Dependence Estimators (AODE) algorithm is an improvement of the naive Bayes algorithm that allows all attributes to depend on one common attribute, called the parent attribute, thus forming One-Dependence Estimators (ODEs). The classification probability is estimated by averaging the conditional probabilities of the ODEs. When there are dependency relationships between attributes, the AODE algorithm can capture these relationships, thus improving classification performance. However, the AODE algorithm treats the parent and child attribute values equally in the different ODEs, although the parent and child attribute values in different ODEs have different importance for classification. In this paper, two attribute-value-weighted AODE algorithms based on Kullback–Leibler divergence are proposed: one weights the parent attribute values and the other weights the child attribute values. Comparative experiments were carried out on 30 datasets from the UCI database, and they indicate that the performance of the parent-attribute-value-weighted AODE algorithm with Kullback–Leibler divergence is significantly better than that of the original AODE algorithm, and also better than that of the mutual-information-weighted AODE algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
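One natural way to realize the attribute-value weighting of result 12 is to weight each parent attribute value by the KL divergence between the class distribution conditioned on that value and the class prior; this is an assumed scheme for illustration, not necessarily the paper's exact definition:

```python
import numpy as np

def kl_weight(class_given_value, class_prior, eps=1e-12):
    """Weight for one attribute value: KL divergence between the class
    distribution conditioned on that value and the class prior
    (an assumed weighting scheme for illustration)."""
    p = np.asarray(class_given_value, float) + eps
    q = np.asarray(class_prior, float) + eps
    return float(np.sum(p * np.log(p / q)))

class_prior = [0.6, 0.4]
# Class distributions conditioned on two values of a candidate parent attribute:
print(kl_weight([0.9, 0.1], class_prior))  # informative value -> larger weight
print(kl_weight([0.6, 0.4], class_prior))  # uninformative value -> weight near 0
```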
13. A Maximum Value for the Kullback–Leibler Divergence between Quantized Distributions.
- Author
Bonnici, Vincenzo
- Subjects
PROBABILITY measures, DISTRIBUTION (Probability theory), PROBABILITY theory
- Abstract
The Kullback–Leibler (KL) divergence is a widely used measure for comparing probability distributions, but it faces limitations such as its unbounded nature and the lack of comparability between distributions with different quantum values (the discrete unit of probability). This study addresses these challenges by introducing the concept of quantized distributions, which are probability distributions formed by distributing a given discrete quantity or quantum. This study establishes an upper bound for the KL divergence between two quantized distributions, enabling the development of a normalized KL divergence that ranges between 0 and 1. The theoretical findings are supported by empirical evaluations, demonstrating the distinct behavior of the normalized KL divergence compared to other commonly used measures. The results highlight the importance of considering the quantum value when applying the KL divergence, offering insights for future advancements in divergence measures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
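The normalization idea of result 13 can be sketched with a simple bound: for two distributions quantized over the same quantum count N (all probabilities multiples of 1/N), the KL divergence cannot exceed log N, so dividing by log N maps it into [0, 1]. The paper's bound may be tighter or more general; this one is an illustrative assumption:

```python
import numpy as np

def kl(p, q):
    mask = p > 0                      # q must be nonzero wherever p is
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def normalized_kl(counts_p, counts_q):
    """Normalized KL in [0, 1] for two distributions quantized over the same
    quantum count N; uses the illustrative bound KL <= log N, attained when
    p concentrates all N quanta on a cell where q places a single quantum."""
    N = sum(counts_p)
    assert N == sum(counts_q), "both distributions must share the same N"
    p = np.asarray(counts_p, float) / N
    q = np.asarray(counts_q, float) / N
    return kl(p, q) / np.log(N)

print(normalized_kl([10, 0, 0], [1, 5, 4]))   # 1.0: the maximal case
print(normalized_kl([3, 4, 3], [4, 3, 3]))    # close to 0
```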
14. Optimum Achievable Rates in Two Random Number Generation Problems with f-Divergences Using Smooth Rényi Entropy †.
- Author
Nomura, Ryo and Yagi, Hideki
- Subjects
RENYI'S entropy, INFORMATION theory, CONVEX functions, DISTRIBUTION (Probability theory)
- Abstract
Two typical fixed-length random number generation problems in information theory are considered for general sources. One is the source resolvability problem and the other is the intrinsic randomness problem. In each of these problems, the optimum achievable rate with respect to the given approximation measure is one of our main concerns and has been characterized using two different information quantities: the information spectrum and the smooth Rényi entropy. Recently, optimum achievable rates with respect to f-divergences have been characterized using the information spectrum quantity. The f-divergence is a general non-negative measure between two probability distributions defined on the basis of a convex function f. The class of f-divergences includes several important measures such as the variational distance, the KL divergence, the Hellinger distance and so on. Hence, it is meaningful to consider the random number generation problems with respect to f-divergences. However, optimum achievable rates with respect to f-divergences using the smooth Rényi entropy have not yet been clarified for either problem. In this paper, we analyze the optimum achievable rates using the smooth Rényi entropy and extend the class of f-divergences. To do so, we first derive general formulas for the first-order optimum achievable rates with respect to f-divergences in both problems under the same conditions as imposed by previous studies. Next, we relax the conditions on the f-divergence and generalize the obtained general formulas. Then, we particularize our general formulas to several specific functions f. As a result, we reveal that it is easy to derive optimum achievable rates for several important measures from our general formulas. Furthermore, a kind of duality between the resolvability and the intrinsic randomness is revealed in terms of the smooth Rényi entropy. Second-order optimum achievable rates and optimistic achievable rates are also investigated. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. On the optimality of score-driven models.
- Author
Gorgi, P., Lauria, C. S. A., and Luati, A.
- Subjects
BIOMETRY, DENSITY, GENERALIZATION, DEFINITIONS
- Abstract
Score-driven models have recently been introduced as a general framework to specify time-varying parameters of conditional densities. The score enjoys stochastic properties that make these models easy to implement and convenient to apply in several contexts, ranging from biostatistics to finance. Score-driven parameter updates have been shown to be optimal in terms of locally reducing a local version of the Kullback–Leibler divergence between the true conditional density and the postulated density of the model. A key limitation of this optimality property is that it holds only locally, both in the parameter space and in the sample space, yielding a definition of local Kullback–Leibler divergence that is in fact not a divergence measure. The current paper shows that score-driven updates satisfy stronger optimality properties that are based on a global definition of the Kullback–Leibler divergence. In particular, it is shown that score-driven updates reduce the distance between the expected updated parameter and the pseudo-true parameter. Furthermore, depending on the conditional density and the scaling of the score, the optimality result can hold globally over the parameter space, which can be viewed as a generalization of the monotonicity property of the stochastic gradient descent scheme. Several examples illustrate how the results derived in the paper apply to specific models under different easy-to-check assumptions, and provide a formal method to select the link function and the scaling of the score. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
16. Cumulative α-Jensen–Shannon measure of divergence: Properties and applications.
- Author
Riyahi, H., Baratnia, M., and Doostparast, M.
- Subjects
INFORMATION theory, DISTRIBUTION (Probability theory), TELECOMMUNICATION systems, DATA mining, MACHINE learning, CUMULATIVE distribution function
- Abstract
The problem of quantifying the distance between distributions arises in various fields, including cryptography, information theory, communication networks, machine learning, and data mining. In this article, by analogy with the cumulative Jensen–Shannon divergence defined in Nguyen and Vreeken (2015), we propose a new divergence measure based on the cumulative distribution function and call it the cumulative α-Jensen–Shannon divergence, denoted by CJS(α). Properties of CJS(α) are studied in detail, and two upper bounds for CJS(α) are obtained. Simplified results under the proportional reversed hazard rate model are given. Various illustrative examples are analyzed. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Distributionally Robust Newsvendor Under Stochastic Dominance with a Feature-Based Application.
- Author
Fu, Mingyang, Li, Xiaobo, and Zhang, Lianmin
- Subjects
STOCHASTIC dominance, ROBUST optimization, AMBIGUITY, INVENTORIES, GRANTS (Money)
- Abstract
Problem definition: In this paper, we study the newsvendor problem under some distributional ambiguity sets and explore their relations. Additionally, we explore the benefits of implementing this robust solution in the feature-based newsvendor problem. Methodology and results: We propose a new type of discrepancy-based ambiguity set, the JW ambiguity set, and analyze it within the framework of first-order stochastic dominance. We show that the distributionally robust optimization (DRO) problem with this ambiguity set admits a closed-form solution for the newsvendor loss. This result also implies that the newsvendor problem under the well-known infinity-Wasserstein ambiguity set and Lévy ball ambiguity set admit closed-form inventory levels as a by-product. In the application of feature-based newsvendor, we adopt general kernel methods to estimate the conditional demand distribution and apply our proposed DRO solutions to account for the estimation error. Managerial implications: The closed-form solutions enable an efficient computation of optimal inventory levels. In addition, we explore the property of optimal robust inventory levels with respect to the nonrobust version via concepts of perceived critical ratio and mean repulsion. The results of numerical experiments and the case study indicate that the proposed model outperforms other state-of-the-art approaches, particularly in environments where demand is influenced by covariates and difficult to estimate. Funding: X. Li is supported by the Singapore Ministry of Education [Tier 1 Grant 23-0619-P0001, 24-0500-A0001] and National Natural Science Foundation of China [Grant 72331004]. L. Zhang is partially supported by the National Natural Science Foundation of China [Grants 72171156 and 72231002] and the Hong Kong Research Grants Council [Grant 16212419]. Supplemental Material: The online appendix is available at https://doi.org/10.1287/msom.2023.0159. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Local inconsistency detection using the Kullback–Leibler divergence measure
- Author
Loukia M. Spineli
- Subjects
Network meta-analysis, Consistency, Kullback–Leibler divergence, Information loss, Medicine
- Abstract
Background: The standard approach to local inconsistency assessment typically relies on testing the conflict between the direct and indirect evidence in selected treatment comparisons. However, statistical tests for inconsistency have low power and are subject to misinterpreting a p-value above the significance threshold as evidence of consistency. Methods: We propose a simple framework to interpret local inconsistency based on the average Kullback–Leibler divergence (KLD) from approximating the direct with the corresponding indirect estimate and vice versa. Our framework directly uses the mean and standard error (or posterior mean and standard deviation) of the direct and indirect estimates obtained from a local inconsistency method to calculate the average KLD measure for selected comparisons. The average KLD values are compared with a semi-objective threshold to judge the inconsistency as acceptably low or material. We exemplify our novel interpretation approach using three networks with multiple treatments and multi-arm studies. Results: Almost all selected comparisons in the networks were not associated with statistically significant inconsistency at a significance level of 5%. The proposed interpretation framework indicated 14%, 66%, and 75% of the selected comparisons with an acceptably low inconsistency in the corresponding networks. Overall, information loss was more notable when approximating the posterior density of the indirect estimates with that of the direct estimates, attributed to indirect estimates being more imprecise. Conclusions: Using the concept of information loss between two distributions alongside a semi-objectively defined threshold helped distinguish target comparisons with acceptably low inconsistency from those with material inconsistency when statistical tests for inconsistency were inconclusive.
- Published
- 2024
- Full Text
- View/download PDF
19. Kullback–Leibler divergence based multidimensional robust universal hypothesis testing.
- Author
Bahçeci, Ufuk
- Abstract
In ball-type robust universal hypothesis testing (UHT), the null hypothesis is a set of probability distributions constrained by a ball of radius r > 0, denoted B(P₀, r), based on the cumulative distribution function of the nominal distribution P₀. A major limitation is that this method was originally designed only for one-dimensional distributions. To overcome this limitation, this paper proposes a new method for multidimensional samples. For this purpose, new bounds are first defined in the multidimensional domain. Then, a new mathematical programming model based on the transformed region of B(P₀, r), namely the empirical multidimensional robust UHT problem based on Kullback–Leibler divergence, is proposed for ball-type robust UHT. The power of the new testing method combined with different types of bounds is then demonstrated in a computational study. This method fills a research gap by enabling ball-type robust UHT for multidimensional samples and is flexible in that it can be used with different types of bounds. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. Computing marginal and conditional divergences between decomposable models with applications in quantum computing and earth observation.
- Author
Lee, Loong Kuan, Webb, Geoffrey I., Schmidt, Daniel F., and Piatkowski, Nico
- Subjects
MARGINAL distributions, QUANTUM computing
- Abstract
The ability to compute the exact divergence between two high-dimensional distributions is useful in many applications, but doing so naively is intractable. Computing the αβ-divergence—a family of divergences that includes the Kullback–Leibler divergence and Hellinger distance—between the joint distributions of two decomposable models, i.e., chordal Markov networks, can be done in time exponential in the treewidth of these models. Extending this result, we propose an approach to compute the exact αβ-divergence between any marginal or conditional distributions of two decomposable models. To do so tractably, we provide a decomposition over the marginal and conditional distributions of decomposable models. We then show how our method can be used to analyze distributional changes by applying it to the benchmark image dataset QMNIST and to a dataset containing observations from various areas of the Roosevelt National Forest and their cover type. Finally, based on our framework, we propose a novel way to quantify the error in contemporary superconducting quantum computers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. A deep clustering framework integrating pairwise constraints and a VMF mixture model
- Author
He Ma and Weipeng Wu
- Subjects
generative deep clustering, variational autoencoder, von Mises-Fisher mixture model, pairwise constraints, Kullback–Leibler divergence, Mathematics (QA1-939), Applied mathematics. Quantitative methods (T57-57.97)
- Abstract
We presented a novel deep generative clustering model called Variational Deep Embedding based on Pairwise constraints and the Von Mises-Fisher mixture model (VDEPV). VDEPV consists of fully connected neural networks capable of learning latent representations from raw data and accurately predicting cluster assignments. Under the assumption of a genuinely non-informative prior, VDEPV adopted a von Mises-Fisher mixture model to depict the hyperspherical interpretation of the data. We defined and established pairwise constraints by employing a random sample mining strategy and applying data augmentation techniques. These constraints enhanced the compactness of intra-cluster samples in the spherical embedding space while improving inter-cluster samples' separability. By minimizing Kullback-Leibler divergence, we formulated a clustering loss function based on pairwise constraints, which regularized the joint probability distribution of latent variables and cluster labels. Comparative experiments with other deep clustering methods demonstrated the excellent performance of VDEPV.
- Published
- 2024
- Full Text
- View/download PDF
22. Finding influential nodes in complex networks based on Kullback–Leibler model within the neighborhood
- Author
Guan Wang, Zejun Sun, Tianqin Wang, Yuanzhe Li, and Haifeng Hu
- Subjects
Complex networks, Information dissemination, Influential nodes, Kullback–Leibler divergence, Neighborhood, Medicine, Science
- Abstract
As a hot research topic in the field of network security, the implementation of machine learning, such as federated learning, involves information interactions among a large number of distributed network devices. If we regard these distributed network devices and their connection relationships as a complex network, we can identify the influential nodes to find the crucial points for optimizing the imbalance in device reliability in a federated learning system. This paper analyzes the advantages and disadvantages of existing algorithms for identifying influential nodes in complex networks, and proposes a method, from the perspective of information dissemination, for finding influential nodes based on a Kullback–Leibler divergence model within the neighborhood (KLN). Firstly, the KLN algorithm removes a node to simulate the scenario of node failure in the information dissemination process. Secondly, KLN evaluates the loss of information entropy within the neighborhood after node removal by establishing the KL divergence model. Finally, it assesses the damage influence of the removed node by integrating the network attributes and the KL divergence model, thus achieving an evaluation of node importance. To validate the performance of KLN, this paper compares its results with those of 11 other algorithms on 10 networks, using the SIR model as a reference. Additionally, a case study was undertaken on a real epidemic propagation network, leading to the proposal of management and control strategies for daily protection based on the influential nodes. The experimental results indicate that KLN effectively evaluates the importance of the removed node using the KL model within the neighborhood, and demonstrates better accuracy and applicability across networks of different scales.
- Published
- 2024
- Full Text
- View/download PDF
23. Computer-Aided Diagnosis of Diabetic Retinopathy Lesions Based on Knowledge Distillation in Fundus Images.
- Author
Moya-Albor, Ernesto, Lopez-Figueroa, Alberto, Jacome-Herrera, Sebastian, Renza, Diego, and Brieva, Jorge
- Subjects
COMPUTER-aided diagnosis, DIABETIC retinopathy, CONVOLUTIONAL neural networks, HEALTH literacy, DIABETES complications
- Abstract
At present, the early diagnosis of diabetic retinopathy (DR), a possible complication of diabetes due to elevated glucose concentrations in the blood, is usually performed by specialists through manual inspection of high-resolution fundus images based on lesion screening, leading to problems such as high work intensity and accessibility only in specialized health centers. To support the diagnosis of DR, we propose a deep learning-based (DL) DR lesion classification method built on a knowledge distillation (KD) strategy. First, we use the pre-trained DL architecture Inception-v3 as a teacher model to distill the dataset. Then, a student model, also using the Inception-v3 architecture, is trained on the distilled dataset to match the performance of the teacher model. In addition, a new combination of Kullback–Leibler (KL) divergence and categorical cross-entropy (CCE) loss is used to measure the difference between the teacher and student models. This combined metric encourages the student model to mimic the predictions of the teacher model. Finally, the trained student model is evaluated on a validation dataset to assess its performance and compare it with both the teacher model and another competitive DL model. Experiments are conducted on two datasets, one imbalanced and one balanced. Two baseline models (Inception-v3 and YOLOv8) are evaluated for reference, obtaining maximum training accuracies of 66.75% and 90.90%, respectively, and maximum validation accuracies of 35.94% and 81.52%, both on the imbalanced dataset. In contrast, the proposed DR classification model achieves an average training accuracy of 99.01% and an average validation accuracy of 97.30%, surpassing the baseline models and other state-of-the-art works. Experimental results show that the proposed model achieves competitive results in DR lesion detection and classification tasks, assisting in the early diagnosis of diabetic retinopathy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
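The KL-plus-CCE objective of result 23 resembles the standard Hinton-style distillation loss; the exact weighting and temperature used in the paper are not given in the abstract, so the values below are assumptions:

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, true_label, T=3.0, alpha=0.5):
    """alpha * CCE(one-hot, student) + (1 - alpha) * T^2 * KL(teacher || student),
    the common knowledge-distillation objective; T and alpha here are assumed."""
    s = softmax(student_logits)            # student probabilities, T = 1
    s_T = softmax(student_logits, T)       # temperature-softened student
    t_T = softmax(teacher_logits, T)       # temperature-softened teacher
    cce = -np.log(s[true_label] + 1e-12)
    kl = float(np.sum(t_T * np.log((t_T + 1e-12) / (s_T + 1e-12))))
    return alpha * cce + (1 - alpha) * T**2 * kl

# Hypothetical logits for one fundus image with ground-truth class 0:
print(distillation_loss([2.0, 0.5, -1.0], [3.0, 0.2, -2.0], true_label=0))
```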
24. A New and Effective Classification Method for Complex Time Series Based on Information Measure.
- Author
Li, Ang and Shang, Pengjian
- Subjects
TIME series analysis, MULTIDIMENSIONAL scaling, INFORMATION measurement, CLASSIFICATION, EXHIBITIONS
- Abstract
The growing importance of time series information measures raises the question of how to effectively cluster large numbers of nonlinear, complex time series and accurately extract the hidden information within them. In this paper, a clustering measurement and classification method for complex time series, the symmetrical exponential Tsallis relative information (SETRI) measure, is proposed to address these problems. The intrinsic characteristics of different types of time series can be validly identified by this method. The modified multidimensional scaling (MDS) method, based on the SETRI measure, can display the data in the form of graphs for intuitive exhibition and accomplish dimension reduction. The introduction of weighted permutation patterns allows higher-accuracy classification, not only for quantifying time series dissimilarity but also for avoiding unnecessary errors. Besides, the feasibility of the modified MDS classification method is visually and quantitatively verified on simulated and real-world data. Compared with other MDS methods, the proposed method performs better, as reflected in the validity and rationality of the clustering results, further verifying its feasibility. The new results should therefore help in developing methods for complex data clustering and dimensionality reduction. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. Dispersion indices based on Kerridge inaccuracy measure and Kullback-Leibler divergence.
- Author
Balakrishnan, Narayanaswamy, Buono, Francesco, Calì, Camilla, and Longobardi, Maria
- Subjects
GENERATING functions, INFINITE series (Mathematics), DISPERSION (Chemistry), INFORMATION measurement
- Abstract
Recently, a new dispersion index, as a measure of information, has been introduced, called varentropy. In this article, we introduce new measures of variability based on two measures of uncertainty, namely the Kerridge inaccuracy measure and the Kullback-Leibler divergence. Their generating functions are considered and their infinite series representations are given. These new measures, together with associated properties, bounds, and illustrative examples, are presented in detail. Finally, an application of the Kullback-Leibler divergence and its dispersion index is illustrated using the mean-variance rule. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
26. Adaptive coalition with Kullback–Leibler divergence for cooperative spectrum sensing (KLDCSS) in cognitive radio networks.
- Author
Munisamy, Manimegalai and A Bhagyaveni, M
- Subjects
COGNITIVE radio, RADIO networks, COALITIONS, DETECTION alarms, SIGNAL-to-noise ratio
- Abstract
Summary: Effective, reliable, and rapid spectrum sensing, combined with maximum throughput efficiency, is necessary in a cognitive radio network to maximize the network's utilization. An adaptive coalition scheme using Kullback–Leibler divergence (KLD)-based cooperative spectrum sensing (KLDCSS), with a nominal sensing time to optimize spectral efficiency, is presented in this paper. In a coalition-based spectrum sensing scheme, cooperating secondary users (SUs) are allocated to different coalitions to sense different primary user (PU) channels, and the cognitive users allocated to sense the PU channels are assigned to different sensing phases. In the proposed sensing scheme, the cooperating SUs are adaptively allocated to different coalitions to maximize throughput efficiency. Collaborating cognitive users with a standard signal-to-noise ratio and distinct energy levels, characterized by localized probabilities of detection and false alarm, are considered. An adaptive coalition and user allocation algorithm is proposed to optimize the average opportunistic throughput and diminish the sensing overhead. An expression for throughput efficiency in terms of average throughput and sensing accuracy is derived. In this scheme, each SU computes the log-likelihood ratio (LLR) from the acquired sample and forwards it to the fusion center. The fusion center then accumulates the LLR values computed by the SUs to estimate the likelihood of spectrum availability. Simulation results highlighting the competitive edge of the proposed scheme, with higher throughput efficiency over various numbers of SUs, coalitions, and signal-to-noise ratios, are presented. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
27. Unveiling Malicious Network Flows Using Benford's Law.
- Author
Fernandes, Pedro, Ciardhuáin, Séamus Ó, and Antunes, Mário
- Subjects
BENFORD'S law (Statistics), COMPUTER network security, BAYES' theorem, COMPUTER network traffic, TRAFFIC flow
- Abstract
The increasing proliferation of cyber-attacks threatening the security of computer networks has driven the development of more effective methods for identifying malicious network flows. The inclusion of statistical laws, such as Benford's Law, and distance functions, applied to the first digits of network flow metadata, such as IP addresses or packet sizes, facilitates the detection of abnormal patterns in the digits. These techniques also allow for quantifying discrepancies between expected and suspicious flows, significantly enhancing the accuracy and speed of threat detection. This paper introduces a novel method for identifying and analyzing anomalies within computer networks. It integrates Benford's Law into the analysis process and incorporates a range of distance functions, namely the Mean Absolute Deviation (MAD), the Kolmogorov–Smirnov test (KS), and the Kullback–Leibler divergence (KL), which serve as dispersion measures for quantifying the extent of anomalies detected in network flows. Benford's Law is recognized for its effectiveness in identifying anomalous patterns, especially in detecting irregularities in the first digit of the data. In addition, Bayes' Theorem was implemented in conjunction with the distance functions to enhance the detection of malicious traffic flows. Bayes' Theorem provides a probabilistic perspective on whether a traffic flow is malicious or benign. This approach is characterized by its flexibility in incorporating new evidence, allowing the model to adapt to emerging malicious behavior patterns as they arise. Meanwhile, the distance functions offer a quantitative assessment, measuring specific differences between traffic flows, such as frequency, packet size, time between packets, and other relevant metadata. Integrating these techniques has increased the model's sensitivity in detecting malicious flows, reducing the number of false positives and negatives, and enhancing the resolution and effectiveness of traffic analysis. Furthermore, these techniques expedite decisions regarding the nature of traffic flows based on a solid statistical foundation and provide a better understanding of the characteristics that define these flows, contributing to the comprehension of attack vectors and aiding in preventing future intrusions. The effectiveness and applicability of this joint method have been demonstrated through experiments with the CICIDS2017 public dataset, which was explicitly designed to simulate real scenarios and provide valuable information to security professionals when analyzing computer networks. The proposed methodology opens up new perspectives in investigating and detecting anomalies and intrusions in computer networks, which are often attributed to cyber-attacks. This development culminates in creating a promising model that stands out for its effectiveness and speed, accurately identifying possible intrusions with an F1 of nearly 80%, a recall of 99.42%, and an accuracy of 65.84%. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
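The Benford-plus-KL step of result 27 reduces to comparing the empirical first-digit distribution of a flow's metadata against Benford's distribution P(d) = log10(1 + 1/d). A minimal sketch with synthetic data standing in for flow metadata:

```python
import numpy as np

# Benford's Law: expected frequency of leading digit d is log10(1 + 1/d).
BENFORD = np.log10(1 + 1 / np.arange(1, 10))

def first_digits(values):
    """Leading decimal digit of each positive value."""
    v = np.abs(np.asarray(values, float))
    v = v[v > 0]
    return (v / 10 ** np.floor(np.log10(v))).astype(int)

def benford_kl(values, eps=1e-12):
    """KL divergence of the empirical first-digit distribution from Benford's."""
    emp = np.bincount(first_digits(values), minlength=10)[1:].astype(float)
    emp = (emp + eps) / (emp + eps).sum()
    return float(np.sum(emp * np.log(emp / BENFORD)))

# Hypothetical packet sizes: a large KL flags a flow for closer inspection.
rng = np.random.default_rng(7)
print(benford_kl(rng.lognormal(6, 2, 5000)))   # spans magnitudes -> near Benford
print(benford_kl(np.full(5000, 512.0)))        # degenerate flow -> large KL
```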
28. Efficient First-Order Algorithms for Large-Scale, Non-Smooth Maximum Entropy Models with Application to Wildfire Science.
- Author
Provencher Langlois, Gabriel, Buch, Jatan, and Darbon, Jérôme
- Subjects
OPTIMIZATION algorithms, DISTRIBUTION (Probability theory), PARAMETER estimation, STATISTICAL models, STATISTICS
- Abstract
Maximum entropy (MaxEnt) models are a class of statistical models that use the maximum entropy principle to estimate probability distributions from data. Due to the size of modern data sets, MaxEnt models need efficient optimization algorithms to scale well for big data applications. State-of-the-art algorithms for MaxEnt models, however, were not originally designed to handle big data sets; these algorithms either rely on technical devices that may yield unreliable numerical results, scale poorly, or require smoothness assumptions that many practical MaxEnt models lack. In this paper, we present novel optimization algorithms that overcome the shortcomings of state-of-the-art algorithms for training large-scale, non-smooth MaxEnt models. Our proposed first-order algorithms leverage the Kullback–Leibler divergence to train large-scale and non-smooth MaxEnt models efficiently. For MaxEnt models with discrete probability distributions of n elements built from samples, each containing m features, the stepsize parameter estimation and iterations in our algorithms scale on the order of O(mn) operations and can be trivially parallelized. Moreover, the strong ℓ1 convexity of the Kullback–Leibler divergence allows for larger stepsize parameters, thereby speeding up the convergence rate of our algorithms. To illustrate the efficiency of our novel algorithms, we consider the problem of estimating probabilities of fire occurrences as a function of ecological features in the Western US MTBS-Interagency wildfire data set. Our numerical results show that our algorithms outperform the state of the art by one order of magnitude and yield results that agree with physical models of wildfire occurrence and previous statistical analyses of wildfire drivers. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
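The abstract of result 28 does not spell out its algorithms, but the role of the KL divergence in simplex-constrained optimization can be illustrated with a generic entropic mirror-descent (multiplicative-weights) update, whose Bregman distance is exactly the KL divergence. This is a textbook sketch, not the paper's method:

```python
import numpy as np

def entropic_md_step(p, grad, eta):
    """One mirror-descent step on the probability simplex whose Bregman
    distance is the KL divergence: a multiplicative-weights update."""
    w = p * np.exp(-eta * grad)
    return w / w.sum()

# Toy objective: KL(p || u), minimized at the uniform distribution u.
u = np.full(4, 0.25)
p = np.array([0.7, 0.1, 0.1, 0.1])
for _ in range(200):
    grad = np.log(p / u) + 1.0          # gradient of KL(p || u) in p
    p = entropic_md_step(p, grad, eta=0.1)
print(np.round(p, 4))                    # -> [0.25 0.25 0.25 0.25]
```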
29. A novel multi-sensor hybrid fusion framework.
- Author
Du, Haoran, Wang, Qi, Zhang, Xunan, Qian, Wenjun, and Wang, Jixin
- Subjects
MULTISENSOR data fusion, CONVOLUTIONAL neural networks, FEATURE extraction, DATABASES
- Abstract
Multi-sensor data fusion has emerged as a powerful approach to enhance the accuracy and robustness of diagnostic systems. However, effectively integrating multiple sensor data remains a challenge. To address this issue, this paper proposes a novel multi-sensor fusion framework. Firstly, a vibration signal weighted fusion rule based on Kullback–Leibler divergence-permutation entropy is introduced, which adaptively determines the weighting coefficients by considering the positional differences of different sensors. Secondly, a lightweight multi-scale convolutional neural network is designed for feature extraction and fusion of multi-sensor data. An ensemble classifier is employed for fault classification, and an improved hard voting strategy is proposed to achieve more reliable decision fusion. Finally, the superiority of the proposed method is validated using modular state detection data from the Kaggle database. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. A Structural Damage Localization Method Based on Empirical Probability Mass Function of ARMAX Model Residual and Kullback–Leibler Divergence.
- Author
Ma, Lingjuan, Wang, Fengdan, Ma, Xiao, Xiao, Yuzhu, Deng, Qingtian, Li, Xinbo, and Song, Xueli
- Subjects
EXTREME value theory, CHI-square distribution, EMPIRICAL research, PROBABILITY theory, MOVING average process
- Abstract
Damage localization is very significant in engineering applications. The existing method, based on the chi-square distribution of the residual of an autoregressive moving average with exogenous inputs (ARMAX) model, is not applicable to realistic excitations other than Gaussian excitation. To solve this problem, this paper presents a structural damage localization method based on the empirical probability mass function (EPMF) of the ARMAX model residual and the Kullback–Leibler (KL) divergence. In detail, we employ the empirical data analysis (EDA) approach to estimate the EPMF of the ARMAX model residual of the data generated by an arbitrary excitation, because EDA does not need any a priori knowledge about the model residual. Moreover, the KL divergence is introduced to measure the dissimilarity of the EPMFs in the undamaged and damaged states, to show that our method is effective for arbitrary excitations. Finally, semi-parametric extreme value theory is used to estimate a reliable threshold for localizing the damage. Numerical simulation and experimental results illustrate that the proposed method localizes damage under different excitations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
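The core comparison in result 30, an empirical PMF of model residuals in the baseline state against that of new data, with KL divergence as the damage indicator, can be sketched as follows; the ARMAX fitting and the extreme-value threshold estimation are omitted, and Gaussian noise stands in for model residuals:

```python
import numpy as np

def epmf(residuals, edges):
    """Empirical probability mass function of model residuals on fixed bins."""
    counts, _ = np.histogram(residuals, bins=edges)
    counts = counts.astype(float) + 1e-12       # smooth empty bins
    return counts / counts.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
edges = np.linspace(-4, 4, 41)
baseline = epmf(rng.normal(0, 1.0, 10000), edges)  # undamaged-state residuals
healthy = epmf(rng.normal(0, 1.0, 2000), edges)    # new data, same state
damaged = epmf(rng.normal(0, 1.6, 2000), edges)    # variance shift from damage

print(kl(healthy, baseline))   # small: stays below a damage threshold
print(kl(damaged, baseline))   # larger: flags a candidate damage location
```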
31. Finding influential nodes in complex networks based on Kullback–Leibler model within the neighborhood.
- Author
Wang, Guan, Sun, Zejun, Wang, Tianqin, Li, Yuanzhe, and Hu, Haifeng
- Subjects
FEDERATED learning, NEIGHBORHOODS, INFORMATION dissemination, ENTROPY (Information theory), COMPUTER network security, MACHINE learning
- Abstract
As a hot research topic in the field of network security, the implementation of machine learning, such as federated learning, involves information interactions among a large number of distributed network devices. If we regard these distributed network devices and their connection relationships as a complex network, we can identify the influential nodes to find the crucial points for optimizing the imbalance in device reliability in a federated learning system. This paper analyzes the advantages and disadvantages of existing algorithms for identifying influential nodes in complex networks, and proposes a method, from the perspective of information dissemination, for finding influential nodes based on a Kullback–Leibler divergence model within the neighborhood (KLN). Firstly, the KLN algorithm removes a node to simulate the scenario of node failure in the information dissemination process. Secondly, KLN evaluates the loss of information entropy within the neighborhood after node removal by establishing the KL divergence model. Finally, it assesses the damage influence of the removed node by integrating the network attributes and the KL divergence model, thus achieving an evaluation of node importance. To validate the performance of KLN, this paper compares its results with those of 11 other algorithms on 10 networks, using the SIR model as a reference. Additionally, a case study was undertaken on a real epidemic propagation network, leading to the proposal of management and control strategies for daily protection based on the influential nodes. The experimental results indicate that KLN effectively evaluates the importance of the removed node using the KL model within the neighborhood, and demonstrates better accuracy and applicability across networks of different scales. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
32. Hardness and Approximability of Dimension Reduction on the Probability Simplex.
- Author
Bruno, Roberto
- Subjects
DISTRIBUTION (Probability theory), APPROXIMATION algorithms, NP-complete problems, INTEGERS, PROBABILITY theory
- Abstract
Dimension reduction is a technique used to transform data from a high-dimensional space into a lower-dimensional space, aiming to retain as much of the original information as possible. This approach is crucial in many disciplines, such as engineering, biology, astronomy, and economics. In this paper, we consider the following dimensionality reduction instance: given an n-dimensional probability distribution p and an integer m < n, we aim to find the m-dimensional probability distribution q that is closest to p, using the Kullback–Leibler divergence as the measure of closeness. We prove that the problem is strongly NP-hard, and we present an approximation algorithm for it. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
33. Space-filling designs with a Dirichlet distribution for mixture experiments.
- Author
Jourdan, Astrid
- Subjects
REAL numbers, PROBABILITY density function, MIXTURES
- Abstract
Uniform designs are widely used for experiments with mixtures, and the uniformity of the design points is usually evaluated with a discrepancy criterion. In this paper, we propose a new criterion to measure the deviation between the design point distribution and a Dirichlet distribution. The support of the Dirichlet distribution is the set of d-dimensional vectors whose entries are real numbers in the interval [0,1] summing to 1, which makes it suitable for mixture experiments. Depending on its parameters, the Dirichlet distribution allows symmetric or asymmetric, uniform or more concentrated point distributions. The difference between the empirical and target distributions is evaluated with the Kullback–Leibler divergence, which we estimate in two ways: with the plug-in estimate and with the nearest-neighbor estimate. The resulting two criteria are used to build space-filling designs for mixture experiments. In the particular case of the flat Dirichlet distribution, both criteria lead to uniform designs. They are compared with existing uniformity criteria. The advantage of the new criteria is that they allow distributions other than the uniform and are fast to compute. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
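The nearest-neighbor KL estimate used in result 33 is commonly computed with the Wang-Kulkarni-Verdú estimator. A minimal sketch for a design on the simplex, dropping the redundant last coordinate (the entries sum to 1) so the samples are full-dimensional:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_estimate(x, y):
    """Nearest-neighbor estimate of KL(P || Q) from samples x ~ P (n x d)
    and y ~ Q (m x d), after Wang, Kulkarni and Verdu (2009)."""
    n, d = x.shape
    m = y.shape[0]
    rho = cKDTree(x).query(x, k=2)[0][:, 1]  # distance to nearest other point in x
    nu = cKDTree(y).query(x, k=1)[0]         # distance from each x point into y
    return d * float(np.mean(np.log(nu / rho))) + np.log(m / (n - 1))

# Samples on the simplex have d-1 free coordinates, so drop the last one;
# KL is invariant under this bijection.
rng = np.random.default_rng(1)
design = rng.dirichlet([1.0, 1.0, 1.0], size=500)[:, :-1]   # candidate design
target = rng.dirichlet([1.0, 1.0, 1.0], size=2000)[:, :-1]  # flat Dirichlet sample
print(knn_kl_estimate(design, target))  # near 0 for a well-matched design
```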
34. Variational Online Learning Correlation Filter for Visual Tracking.
- Author
Wang, Zhongyang, Liu, Feng, and Deng, Lizhen
- Subjects
ONLINE education, ARTIFICIAL satellite tracking, FILTERS & filtration, RECOMMENDER systems, INFORMATION filtering
- Abstract
Recently, discriminative correlation filters (DCF) have been successfully applied to visual tracking. However, traditional DCF trackers tend to solve the boundary-effect and temporal-degradation problems separately, and previous methods that use only first-order temporal constraints are prone to overfitting and filter degradation. In this paper, a variational online learning correlation filter (VOLCF) is proposed for visual tracking to improve the robustness and accuracy of the tracking process. First, beyond the standard filter training requirement, our proposed VOLCF method introduces a model confidence term, which leverages the temporal information of adjacent frames during filter training. Second, to ensure the consistency of the temporal and spatial characteristics of the video sequence, the model introduces the Kullback–Leibler (KL) divergence to obtain second-order information about the filter. In contrast to traditional target tracking models that rely solely on first-order feature information, this facilitates a generalized connection between the previous and current filters and thereby incorporates jointly regularized filter updating. Quantitative and qualitative analyses of the experiments show that the VOLCF model achieves excellent tracking performance. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. A deep clustering framework integrating pairwise constraints and a VMF mixture model.
- Author
Ma, He and Wu, Weipeng
- Subjects
DATA augmentation, ARTIFICIAL neural networks, STATISTICAL sampling, PATTERN perception, GAUSSIAN distribution
- Abstract
We presented a novel deep generative clustering model called Variational Deep Embedding based on Pairwise constraints and the Von Mises-Fisher mixture model (VDEPV). VDEPV consists of fully connected neural networks capable of learning latent representations from raw data and accurately predicting cluster assignments. Under the assumption of a genuinely non-informative prior, VDEPV adopted a von Mises-Fisher mixture model to depict the hyperspherical interpretation of the data. We defined and established pairwise constraints by employing a random sample mining strategy and applying data augmentation techniques. These constraints enhanced the compactness of intra-cluster samples in the spherical embedding space while improving inter-cluster samples' separability. By minimizing Kullback-Leibler divergence, we formulated a clustering loss function based on pairwise constraints, which regularized the joint probability distribution of latent variables and cluster labels. Comparative experiments with other deep clustering methods demonstrated the excellent performance of VDEPV. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. Correlations of Cross-Entropy Loss in Machine Learning.
- Author
Connor, Richard, Dearle, Alan, Claydon, Ben, and Vadicamo, Lucia
- Subjects
ARTIFICIAL neural networks, EUCLIDEAN distance, LOGITS, DEEP learning
- Abstract
Cross-entropy loss is crucial in training many deep neural networks. In this context, we show a number of novel and strong correlations among various related divergence functions. In particular, we demonstrate that, in some circumstances, (a) cross-entropy is almost perfectly correlated with the little-known triangular divergence, and (b) cross-entropy is strongly correlated with the Euclidean distance over the logits from which the softmax is derived. The consequences of these observations are as follows. First, triangular divergence may be used as a cheaper alternative to cross-entropy. Second, logits can be used as features in a Euclidean space which is strongly synergistic with the classification process. This justifies the use of Euclidean distance over logits as a measure of similarity, in cases where the network is trained using softmax and cross-entropy. We establish these correlations via empirical observation, supported by a mathematical explanation encompassing a number of strongly related divergence functions. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Viewpoint Selection for 3D-Games with f-Divergences.
- Author
-
Martin, Micaela Y., Sbert, Mateu, and Chover, Miguel
- Subjects
- *
UNCERTAINTY (Information theory) , *DIFFERENTIAL forms , *CAMCORDERS , *VIDEO games - Abstract
In this paper, we present a novel approach to optimal camera selection in video games. The approach uses information-theoretic metrics, namely f-divergences, to measure the discrepancy between the objects as viewed in the camera frustum and the ideal or target view. The f-divergences considered are the Kullback–Leibler divergence (relative entropy), the total variation, and the χ² divergence; Shannon entropy is also used for comparison purposes. Visibility is measured using the differential form factors from the camera to the objects and is computed by casting rays with importance-sampling Monte Carlo. Our method allows a very fast dynamic selection of the best viewpoints, which can account for changes in the scene, in the ideal or target view, and in the objectives of the game. Our prototype is implemented in the Unity engine, and our results show an efficient selection of the camera and improved visual quality. The most discriminating results are obtained with the Kullback–Leibler divergence. [ABSTRACT FROM AUTHOR]
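The three f-divergences named in the abstract have simple closed forms for discrete visibility histograms. The sketch below computes them for an observed versus a target view distribution; the histograms and the uniform target are invented for illustration, and camera scoring details are not taken from the paper.

```python
import numpy as np

def f_divergences(p, q, eps=1e-12):
    """KL divergence, total variation, and chi-squared divergence between
    two normalized visibility histograms p (observed) and q (target)."""
    p = np.clip(np.asarray(p, dtype=float), eps, None); p /= p.sum()
    q = np.clip(np.asarray(q, dtype=float), eps, None); q /= q.sum()
    kl = np.sum(p * np.log(p / q))
    tv = 0.5 * np.sum(np.abs(p - q))
    chi2 = np.sum((p - q) ** 2 / q)
    return kl, tv, chi2

# Toy example: three objects, observed vs. ideal (uniform) screen coverage.
# A camera-selection scheme in this spirit would pick the camera whose
# observed histogram minimizes the chosen divergence to the target.
print(f_divergences([0.6, 0.3, 0.1], [1/3, 1/3, 1/3]))
```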
- Published
- 2024
- Full Text
- View/download PDF
38. Some New f-Divergence Measures and Their Basic Properties.
- Author
-
Dragomir, Silvestru Sever
- Subjects
INTEGRAL inequalities ,CONVEX functions - Abstract
In this paper, we introduce some new f-divergence measures, which we call the t-asymmetric/symmetric divergence measures and the integral divergence measure, establish their joint convexity, and provide some inequalities that connect these f-divergences to the classical one introduced by Csiszár in 1963. Applications to the dichotomy class of convex functions are provided as well. [ABSTRACT FROM AUTHOR]
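For orientation, the classical Csiszár f-divergence against which such new measures are compared is, for a convex function f with f(1) = 0:

```latex
% Csiszár f-divergence of P = (p_i) from Q = (q_i).
% The choice f(t) = t \log t recovers the Kullback–Leibler divergence.
I_f(P, Q) = \sum_i q_i \, f\!\left( \frac{p_i}{q_i} \right)
```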
- Published
- 2024
- Full Text
- View/download PDF
39. Shannon’s Entropy, Kullback-Leibler Divergence, and Mutual Information in Diagnostic Systems
- Author
-
Balayla, Jacques
- Published
- 2024
- Full Text
- View/download PDF
40. Mutual Information and Kullback-Leibler Divergence in the Dempster-Shafer Theory
- Author
-
Shenoy, Prakash P., Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bi, Yaxin, editor, Jousselme, Anne-Laure, editor, and Denoeux, Thierry, editor
- Published
- 2024
- Full Text
- View/download PDF
41. Aggregation of the Distortion Models Induced by the KL Divergence and Euclidean Distance
- Author
-
Montes, Ignacio, Kacprzyk, Janusz, Series Editor, Pal, Nikhil R., Advisory Editor, Bello Perez, Rafael, Advisory Editor, Corchado, Emilio S., Advisory Editor, Hagras, Hani, Advisory Editor, Kóczy, László T., Advisory Editor, Kreinovich, Vladik, Advisory Editor, Lin, Chin-Teng, Advisory Editor, Lu, Jie, Advisory Editor, Melin, Patricia, Advisory Editor, Nedjah, Nadia, Advisory Editor, Nguyen, Ngoc Thanh, Advisory Editor, Wang, Jun, Advisory Editor, Ansari, Jonathan, editor, Fuchs, Sebastian, editor, Trutschnig, Wolfgang, editor, Lubiano, María Asunción, editor, Gil, María Ángeles, editor, Grzegorzewski, Przemyslaw, editor, and Hryniewicz, Olgierd, editor
- Published
- 2024
- Full Text
- View/download PDF
42. Gene Coexpression Analysis with Dirichlet Mixture Model: Accelerating Model Evaluation Through Closed-Form KL Divergence Approximation Using Variational Techniques
- Author
-
Pal, Samyajoy, Heumann, Christian, Einbeck, Jochen, editor, Maeng, Hyeyoung, editor, Ogundimu, Emmanuel, editor, and Perrakis, Konstantinos, editor
- Published
- 2024
- Full Text
- View/download PDF
43. Exploring the Role of Entropy in Music Classification
- Author
-
Ronnie, J. Bryan, Sharma, V. Harish, Angappan, R. Aravind, Srinivasan, R., Filipe, Joaquim, Editorial Board Member, Ghosh, Ashish, Editorial Board Member, Prates, Raquel Oliveira, Editorial Board Member, Zhou, Lizhu, Editorial Board Member, Chakravarthi, Bharathi Raja, editor, B, Bharathi, editor, García Cumbreras, Miguel Ángel, editor, Jiménez Zafra, Salud María, editor, Subramanian, Malliga, editor, Shanmugavadivel, Kogilavani, editor, and Nakov, Preslav, editor
- Published
- 2024
- Full Text
- View/download PDF
44. Dimensionality Reduction Using Band Optimisation
- Author
-
Paul, Arati and Chaki, Nabendu
- Published
- 2024
- Full Text
- View/download PDF
45. Approximate Bregman proximal gradient algorithm for relatively smooth nonconvex optimization
- Author
-
Takahashi, Shota and Takeda, Akiko
- Published
- 2024
- Full Text
- View/download PDF
46. Parameter Estimation for the Fractional Hawkes Process
- Author
-
Habyarimana, Cassien, Aduda, Jane A., and Scalas, Enrico
- Published
- 2024
- Full Text
- View/download PDF
47. Simple variational inference based on minimizing Kullback–Leibler divergence
- Author
-
Nakamura, Ryo, Yuasa, Tomooki, Amaba, Takafumi, and Fujiki, Jun
- Published
- 2024
- Full Text
- View/download PDF
48. Assessing copula models for mixed continuous-ordinal variables
- Author
-
Pan, Shenyi and Joe, Harry
- Subjects
parametric copula ,empirical beta copula ,kullback-leibler divergence ,location-scale mixture models ,normal scores ,ordinal regression ,polyserial correlation ,primary 62h05 ,secondary 62h12 ,Science (General) ,Q1-390 ,Mathematics ,QA1-939 - Abstract
Vine pair-copula constructions exist for a mix of continuous and ordinal variables. In some steps, this can involve estimating a bivariate copula for a pair of mixed continuous-ordinal variables. To assess the adequacy of copula fits for such a pair, diagnostic and visualization methods based on normal score plots and conditional Q–Q plots are proposed. The former uses a latent continuous variable for the ordinal variable. The methods are applied to data generated from some existing probability models for a mixed continuous-ordinal variable pair, and for such models, Kullback-Leibler divergence is used to assess whether simple parametric copula families can provide adequate fits. The effectiveness of the proposed visualization and diagnostic methods is illustrated on a dataset.
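A minimal sketch, under our own assumptions, of the rank-based normal-scores transform underlying such plots. The jittering device used here for the ordinal margin is a common stand-in for a latent continuous variable and is not necessarily the authors' exact construction.

```python
import numpy as np
from scipy.stats import norm, rankdata

def normal_scores(x):
    """z_i = Phi^{-1}((r_i - 0.5) / n): rank-based transform of a
    continuous sample to approximately standard normal margins."""
    r = rankdata(x)
    return norm.ppf((r - 0.5) / len(x))

def latent_normal_scores(ordinal, rng=np.random.default_rng(0)):
    """For an integer-coded ordinal variable, break ties by jittering
    within categories before ranking, mimicking a latent continuous
    variable behind the ordinal observations."""
    jitter = rng.uniform(0.0, 0.5, size=len(ordinal))
    return normal_scores(np.asarray(ordinal, dtype=float) + jitter)
```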
- Published
- 2024
- Full Text
- View/download PDF
49. Study on the combination of virtual machine tools and wearable vibration devices for operators experiencing cutting forces in the milling process
- Author
-
Shang-Hsien Liu, Bo-Cheng Luo, Yung-Chou Kao, and Guo-Hua Feng
- Subjects
Virtual machine tools ,Wearable vibration devices ,Chatter ,Milling process ,Kullback–Leibler divergence ,Medicine ,Science - Abstract
Abstract The primary goal of this study is to develop a wearable system for providing CNC machine operators with visual and tactile perception of triaxial cutting forces, thereby assisting operators in industrial environments to enhance work efficiency and prevent mechanical failures. To achieve this goal, we successfully integrated a virtual machining tool simulator with the remote-control wearable system (RCWS). Using the ‘King Path’ milling parameters, we employed the simulation software developed by the AIM-HI team to calculate static and dynamic cutting forces, converting this data into vibrational commands for the RCWS to generate corresponding tactile feedback. Furthermore, we conducted extensive experiments, testing various data conversion methods, including three sampling techniques and two data compression strategies, aiming to provide accurate tactile feedback related to cutting forces under different operating conditions.
- Published
- 2024
- Full Text
- View/download PDF
50. Privacy and security trade‐off in cyber‐physical systems: An information theory‐based framework.
- Author
-
Wu, Lihan, Wang, Haojun, Liu, Kun, Zhao, Liying, and Xia, Yuanqing
- Subjects
- *
CYBER physical systems , *ALARMS , *INFORMATION storage & retrieval systems , *PRIVACY , *HATE crimes , *INFORMATION theory , *FALSE alarms - Abstract
This article investigates the trade-off between privacy and security in cyber-physical systems, with the goal of designing a privacy-preserving mechanism based on information theory. Considering the unreliability of the communication channel, we assume that the private data is vulnerable to eavesdropping and bias-injection attacks. To maintain privacy, the system is equipped with a privacy-preserving mechanism that injects Gaussian privacy noise into the transmitted data, which inevitably degrades detection performance. We therefore investigate the trade-off between the privacy level and detection performance, where the privacy level is measured by mutual information and the detection performance by the Kullback–Leibler divergence. The optimal privacy noise is obtained by solving a convex optimization problem that maximizes the degree of privacy subject to a bound on the degradation of detection performance. Furthermore, to optimize detection performance, another convex optimization problem is proposed that minimizes both the false-alarm rate and the missed-alarm rate while guaranteeing a given privacy level. Finally, a numerical example of a vehicle-tracking problem illustrates the effectiveness of the designed framework. [ABSTRACT FROM AUTHOR]
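A toy one-dimensional illustration of the two quantities traded off in this abstract: mutual information for the eavesdropper's gain and a mean-shift Gaussian KL divergence as a proxy for detectability of a bias injection. All variances and the attack size are invented for the example; the paper's actual convex programs are not reproduced here.

```python
import numpy as np

def mutual_info_gauss(var_x, var_v):
    """I(x; x + v) in nats for independent zero-mean Gaussians:
    the information an eavesdropper gains despite privacy noise v."""
    return 0.5 * np.log(1.0 + var_x / var_v)

def kl_gauss_mean_shift(delta, var):
    """KL( N(delta, var) || N(0, var) ) = delta^2 / (2 var): a proxy for
    how detectable a bias injection of size delta is under variance var."""
    return delta ** 2 / (2.0 * var)

# Larger privacy noise lowers the eavesdropper's mutual information but
# also shrinks the attack-vs-nominal KL, i.e. degrades detectability.
for var_v in (0.5, 1.0, 2.0):
    print(var_v,
          mutual_info_gauss(1.0, var_v),
          kl_gauss_mean_shift(1.0, 1.0 + var_v))
```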
- Published
- 2024
- Full Text
- View/download PDF