Search Results (52 results)
2. Triple perturbed consistent matrix and the efficiency of its principal right eigenvector.
- Author: Fernandes, Rosário and Palheira, Susana
- Subjects: MATRICES (Mathematics); DECISION making
- Abstract:
Let A be a pairwise comparison matrix obtained from a consistent one by perturbing three entries above the main diagonal, x , y , z , and the corresponding reciprocal entries, in a way that there is a submatrix of size 2 containing the three perturbed entries and not containing a diagonal entry. In this paper we describe the relations among x , y , z with which A always has its principal right eigenvector efficient. Previously, and only for a few cases of this problem, R. Fernandes and S. Furtado (2022) proved the efficiency of the principal right eigenvector of A. In this paper, we continue to use the strong connectivity of a certain digraph associated with A and its principal right eigenvector to characterize the vector efficiency. For completeness, we show that the existence of a sink in this digraph is equivalent to the inefficiency of the principal right eigenvector of A. [ABSTRACT FROM AUTHOR]
- Published: 2024
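As background for this entry, the sketch below (not taken from the paper; the matrix size, perturbation positions, and factors are invented for illustration) builds a consistent pairwise comparison matrix from a weight vector, perturbs three entries above the diagonal lying in a 2x2 submatrix that avoids the main diagonal, together with their reciprocals, and extracts the principal right eigenvector.

```python
import numpy as np

# Illustrative only: a 4x4 consistent pairwise comparison matrix A[i, j] = w[i] / w[j].
w = np.array([1.0, 2.0, 3.0, 5.0])
A = np.outer(w, 1.0 / w)

# Perturb three entries above the diagonal lying in the 2x2 submatrix with
# rows {0, 1} and columns {2, 3} (no diagonal entry), plus their reciprocals.
x, y, z = 1.4, 0.8, 1.2                      # hypothetical perturbation factors
for (i, j), f in zip([(0, 2), (0, 3), (1, 2)], [x, y, z]):
    A[i, j] *= f
    A[j, i] = 1.0 / A[i, j]                  # keep the matrix reciprocal

# Principal right eigenvector (Perron vector), normalised to sum to 1.
vals, vecs = np.linalg.eig(A)
v = np.real(vecs[:, np.argmax(np.real(vals))])
v = np.abs(v) / np.abs(v).sum()
print("priority vector:", v)
```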
3. Desirable gambles based on pairwise comparisons.
- Author: Moral, Serafín
- Subjects: GAMBLING
- Abstract:
This paper proposes a model for imprecise probability information based on bounds on probability ratios, instead of bounds on events. This model is studied in the language of coherent sets of desirable gambles, which provides an elegant mathematical formulation and more expressive power. The paper provides methods to check avoiding sure loss and coherence, and to compute the natural extension. The relationships with other formalisms, such as imprecise multiplicative preferences, the constant odds ratio model, or comparative probability, are analyzed. [ABSTRACT FROM AUTHOR]
- Published: 2024
4. Multidimensional fuzzy sets: Negations and an algorithm for multi-attribute group decision making.
- Author: Santiago, Landerson and Bedregal, Benjamin
- Subjects: FUZZY sets; GROUP decision making; REAL numbers; ALGORITHMS
- Abstract:
Multidimensional fuzzy sets (MFS) are a new extension of fuzzy sets in which the membership values of an element of the universe of discourse are increasingly ordered vectors of real numbers in the interval [0,1]. This paper aims to investigate fuzzy negations on the set of increasingly ordered vectors on [0,1], i.e. on L∞([0,1]) (MFN, in short), with respect to some partial order. In this paper we study partial orders, giving special attention to admissible orders on Ln([0,1]) and L∞([0,1]). In addition, we study the existence of strong multidimensional fuzzy negations and some properties and methods to construct such operators. In particular, we define the ordinal sums of n-dimensional negations and ordinal sums of multidimensional fuzzy negations on a multidimensional product order. A multi-attribute group decision making algorithm is presented. [ABSTRACT FROM AUTHOR]
- Published: 2024
5. Estimating the coverage measure and the area explored by a line-sweep sensor on the plane.
- Author: Costa Vianna, Maria, Goubault, Eric, Jaulin, Luc, and Putot, Sylvie
- Subjects: TOPOLOGICAL degree; INTERVAL analysis; TOPOLOGICAL entropy; DETECTORS
- Abstract:
This paper presents a method for determining the area explored by a line-sweep sensor during an area-covering mission in a two-dimensional plane. Accurate knowledge of the explored area is crucial for various applications in robotics, such as mapping, surveillance, and coverage optimization. The proposed method leverages the concept of coverage measure of the environment and its relation to the topological degree in the plane, to estimate the extent of the explored region. In addition, we extend the approach to uncertain coverage measure values using interval analysis. This last contribution allows for a guaranteed characterization of the explored area, essential considering the often critical character of area-covering missions. Finally, this paper also proposes a novel algorithm for computing the topological degree in the 2-dimensional plane, for all the points inside an area of interest, which differs from existing solutions that compute the topological degree for single points. The applicability of the method is evaluated through a real-world experiment. [ABSTRACT FROM AUTHOR]
- Published: 2024
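The coverage measure used in this entry is defined through the topological degree in the plane; for a closed polygonal sweep, the degree at a single point reduces to a winding number. The toy routine below illustrates only that single-point computation (the paper's contribution is a guaranteed, interval-based computation over whole regions, which is not reproduced here).

```python
def is_left(p0, p1, p2):
    """> 0 if p2 is left of the directed segment p0 -> p1, < 0 if right, 0 if collinear."""
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p2[0] - p0[0]) * (p1[1] - p0[1])

def winding_number(point, polygon):
    """Winding number (planar topological degree) of a closed polygonal curve around a point."""
    wn = 0
    for a, b in zip(polygon, polygon[1:] + polygon[:1]):
        if a[1] <= point[1]:
            if b[1] > point[1] and is_left(a, b, point) > 0:
                wn += 1                      # upward crossing with the point on the left
        elif b[1] <= point[1] and is_left(a, b, point) < 0:
            wn -= 1                          # downward crossing with the point on the right
    return wn

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]   # counter-clockwise sweep
print(winding_number((0.5, 0.5), square))   # 1: covered once
print(winding_number((2.0, 0.5), square))   # 0: outside the swept region
```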
6. Medical decision support in the light of interactive granular computing: Lessons from the Ovufriend project.
- Author: Dutta, Soma, Skowron, Andrzej, and Sosnowski, Łukasz
- Subjects: GRANULAR computing; ARTIFICIAL intelligence; DECISION support systems; INDIVIDUALIZED medicine; DECISION making; COGNITIVE computing
- Abstract:
The main aim of the paper is to discuss the architecture of future Intelligent Systems (ISs) and Decision Support Systems (DSs) dealing with complex phenomena such as supporting medical decisions (diagnosis and therapy), and to emphasize the challenges in designing such systems. More precisely, the paper presents arguments for developing a specialized computing model based on the interactive granular computing paradigm which can help to design ISs and DSs closer to the prototypes of real-life decision making. In this regard, the paper brings to the fore different experiences faced during the design of other medical ISs and DSs. As a starting step, the paper considers the experience of developing the OvuFriend platform and outlines possible extensions of it in the framework of the proposed architecture, on the basis of the Interactive Granular Computing (IGrC) model. Specifically, our attempt is to analyze, from the perspective of IGrC, a scheme used in the OvuFriend platform for determining health risks and a woman's chances of conceiving a child. The target of the paper is twofold: firstly, to show how the underlying AI algorithm of this scheme can be related to the notion of computing in the context of IGrC; secondly, to identify possible extensions of the existing scheme so that it becomes more dynamic, interactive, and closer to personalized medicine. [ABSTRACT FROM AUTHOR]
- Published: 2024
7. Tuning fuzzy SPARQL queries.
- Author: Almendros-Jiménez, Jesús M., Becerra-Terón, Antonio, Moreno, Ginés, and Riaza, José A.
- Subjects: SPARQL (Computer program language); SEMANTIC Web; DATABASES; LOGIC programming; WEB services; PROGRAMMING languages; VIRTUAL communities
- Abstract:
In recent years, the study of fuzzy database query languages has attracted the attention of many researchers. In this line of research, our group has proposed and developed FSA-SPARQL (Fuzzy Sets and Aggregators based SPARQL), which is a fuzzy extension of the Semantic Web query language SPARQL. FSA-SPARQL works with fuzzy RDF datasets and allows the definition of fuzzy queries involving fuzzy conditions through fuzzy connectives and aggregators. However, there are two main challenges to be solved for the practical applicability of FSA-SPARQL. The first problem is the lack of fuzzy RDF data sources. The second is how to customize fuzzy queries on fuzzy RDF data sources. Our research group has also recently proposed a fuzzy logic programming language called FASILL that offers powerful tuning capabilities with applications in many fields. The purpose of this paper is to show how the FASILL tuning capabilities serve to accomplish both FSA-SPARQL challenges, data fuzzification and query customization, in a unified framework. More concretely, through an FSA-SPARQL to FASILL transformation, data fuzzification and query customization in FSA-SPARQL become FASILL tuning problems. We have validated the approach with queries against datasets from online communities. [ABSTRACT FROM AUTHOR]
- Published: 2024
8. Testing the fit of data and external sets via an imprecise Sargan-Hansen test.
- Author: Jann, Martin
- Subjects: FALSE positive error; INFERENTIAL statistics; STATISTICAL models; STOCHASTIC orders; GENERALIZED method of moments; EXPERIMENTAL design
- Abstract:
In empirical sciences such as psychology, the term cumulative science mostly refers to the integration of theories, while external (prior) information may also be used in statistical inference. This external information can be in the form of statistical moments and is subject to various types of uncertainty, e.g., because it is estimated, or because of qualitative uncertainty due to differences in study design or sampling. Before using it in statistical inference, it is therefore important to test whether the external information fits a new data set, taking into account its uncertainties. As a frequentist approach, the Sargan-Hansen test from the generalized method of moments framework is used in this paper. It tests, given a statistical model, whether data and point-wise external information are in conflict. A separability result is given that simplifies the Sargan-Hansen test statistic in most cases. The Sargan-Hansen test is then extended to the imprecise scenario with (estimated) external sets using stochastically ordered credal sets. Furthermore, an exact small sample version is derived for normally distributed variables. As a Bayesian approach, two prior-data conflict criteria are discussed as a test for the fit of external information to the data. Two simulation studies are performed to test and compare the power and type I error of the methods discussed. Different small sample scenarios are implemented, varying the moments used, the level of significance, and other aspects. The results show that both the Sargan-Hansen test and the Bayesian criteria control type I errors while having sufficient or even good power. To facilitate the use of the methods by applied scientists, easy-to-use R functions are provided in the R script in the supplementary materials. [ABSTRACT FROM AUTHOR]
- Published: 2024
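As a rough reminder of the classical statistic that this entry extends, the following hypothetical sketch computes a Sargan-Hansen (J-type) statistic testing whether externally supplied moments fit a sample, assuming i.i.d. data and no estimated model parameters; it does not implement the paper's imprecise (credal-set) extension.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.normal(loc=0.1, scale=1.0, size=200)    # toy data

# External (prior) information: claimed mean and second moment.
mu0, m2_0 = 0.0, 1.0

# Moment conditions g_i = (x_i - mu0, x_i**2 - m2_0); both should average to zero
# if the external information fits the data.
g = np.column_stack([x - mu0, x**2 - m2_0])
gbar = g.mean(axis=0)
S = np.cov(g, rowvar=False)                     # variance estimate of the moments (iid case)

n = len(x)
J = n * gbar @ np.linalg.solve(S, gbar)         # J-type statistic
p_value = stats.chi2.sf(J, df=g.shape[1])       # chi-square with 2 degrees of freedom here
print(f"J = {J:.2f}, p = {p_value:.3f}")
```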
9. Distribution-free Inferential Models: Achieving finite-sample valid probabilistic inference, with emphasis on quantile regression.
- Author: Cella, Leonardo
- Subjects: QUANTILE regression; CONFIDENCE regions (Mathematics); PROBABILITY theory
- Abstract:
This paper presents a novel distribution-free Inferential Model (IM) construction that provides valid probabilistic inference across a broad spectrum of distribution-free problems, even in finite sample settings. More specifically, the proposed IM has the capability to assign (imprecise) probabilities to assertions of interest about any feature of the unknown quantities under examination, and these probabilities are well-calibrated in a frequentist sense. It is also shown that finite-sample confidence regions can be derived from the IM for any such features. Particular emphasis is placed on quantile regression, a domain where uncertainty quantification often takes the form of set estimates for the regression coefficients in applications. Within this context, the IM facilitates the acquisition of these set estimates, ensuring they are finite-sample confidence regions. It also enables the provision of finite-sample valid probabilistic assignments for any assertions of interest about the regression coefficients. As a result, regardless of the type of uncertainty quantification desired, the proposed framework offers an appealing solution to quantile regression. [ABSTRACT FROM AUTHOR]
- Published: 2024
10. Being Bayesian about learning Bayesian networks from ordinal data.
- Author: Grzegorczyk, Marco
- Subjects: BAYESIAN analysis; DIRECTED acyclic graphs; MARKOV chain Monte Carlo
- Abstract:
In this paper we propose a Bayesian approach for inferring Bayesian network (BN) structures from ordinal data. Our approach can be seen as the Bayesian counterpart of a recently proposed frequentist approach, referred to as the 'ordinal structure expectation maximization' (OSEM) method. Like for the OSEM method, the key idea is to assume that each ordinal variable originates from a Gaussian variable that can only be observed in discretized form, and that the dependencies in the latent Gaussian space can be modeled by BNs; i.e. by directed acyclic graphs (DAGs). Our Bayesian method combines the 'structure MCMC sampler' for DAG posterior sampling, a slightly modified version of the 'Bayesian metric for Gaussian networks having score equivalence' (BGe score), the concept of the 'extended rank likelihood', and a recently proposed algorithm for posterior sampling the parameters of Gaussian BNs. In simulation studies we compare the new Bayesian approach and the OSEM method in terms of the network reconstruction accuracy. The empirical results show that the new Bayesian approach leads to significantly improved network reconstruction accuracies. [ABSTRACT FROM AUTHOR]
- Published: 2024
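The latent-Gaussian view described here is easy to simulate: ordinal observations arise by thresholding latent Gaussian variables whose dependencies follow a DAG. The toy generator below (the two-node DAG and the quantile thresholds are my own illustrative choices, not the paper's) produces such data; structure learning would then try to recover the latent dependency from the ordinal values alone.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Latent Gaussian variables with a simple dependency X1 -> X2.
z1 = rng.normal(size=n)
z2 = 0.8 * z1 + rng.normal(scale=0.6, size=n)

def discretize(z, n_levels=4):
    """Map a latent Gaussian sample to ordinal levels 0..n_levels-1 via quantile cuts."""
    cuts = np.quantile(z, np.linspace(0, 1, n_levels + 1)[1:-1])
    return np.digitize(z, cuts)

x1, x2 = discretize(z1), discretize(z2)
# Only (x1, x2) would be observed; the task is to recover the dependency
# in the latent Gaussian space from these ordinal values.
print(np.histogram2d(x1, x2, bins=4)[0])
```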
11. Attribute reduction for heterogeneous data based on monotonic relative neighborhood granularity.
- Author: Dai, Jianhua, Zhu, Zhilin, Li, Min, Zou, Xiongtao, and Zhang, Chucai
- Subjects: DATA reduction; LOGARITHMIC functions; ROUGH sets; ENTROPY (Information theory); INFORMATION measurement
- Abstract:
The neighborhood rough set model serves as an important tool for handling attribute reduction tasks involving heterogeneous attributes. However, measuring the relationship between conditional attributes and decision in the neighborhood rough set model is a crucial issue. Most studies have utilized neighborhood information entropy to measure the relationship between attributes. When using neighborhood conditional information entropy to measure the relationships between the decision and conditional attributes, it lacks monotonicity, consequently affecting the rationality of the final attribute reduction subset. In this paper, we introduce the concept of neighborhood granularity and propose a new form of relative neighborhood granularity to measure the relationship between the decision and conditional attributes, which exhibits monotonicity. Moreover, our approach for measuring neighborhood granularity avoids the logarithmic function computation involved in neighborhood information entropy. Finally, we conduct comparative experiments on 12 datasets using two classifiers to compare the results of attribute reduction with six other attribute reduction algorithms. The comparison demonstrates the advantages of our measurement approach. [ABSTRACT FROM AUTHOR]
- Published: 2024
12. Conditional independence collapsibility for acyclic directed mixed graph models.
- Author: Li, Weihua, Sun, Yi, and Heng, Pei
- Subjects: DIRECTED acyclic graphs; LATENT variables; DIRECTED graphs; GRAPHIC methods in statistics; INFERENTIAL statistics; GRAPH theory; COMPUTATIONAL complexity
- Abstract:
Collapsibility refers to the property that, when marginalizing over some variables that are not of interest from the full model, the resulting marginal model of the remaining variables is equivalent to the local model induced by the subgraph on these variables. This means that when the marginal model satisfies collapsibility, statistical inference results based on the marginal model and the local model are consistent. This has significant implications for small-sample data, modeling latent variable data, and reducing the computational complexity of statistical inference. This paper focuses on studying the conditional independence collapsibility of acyclic directed mixed graph (ADMG) models. By introducing the concept of inducing paths in ADMGs and exploring its properties, the conditional independence collapsibility of ADMGs is characterized equivalently from both graph theory and statistical perspectives. [ABSTRACT FROM AUTHOR]
- Published: 2024
13. Consequence relations and data science: From Galois mappings to data interpretation.
- Author: Wolski, Marcin and Gomolińska, Anna
- Subjects: DATA science; DATA mapping; INFORMATION storage & retrieval systems; INFORMATION processing; ROUGH sets
- Abstract:
The concepts of a consequence relation and operation, though very abstract and theoretical, may be related to specific categories of information systems (i.e. mathematical frontends of data tables); as it has been demonstrated by D. Vakarelov, there exist correspondence between Pawlak information systems and Scott as well as Tarski consequence operations. This line of research goes (via representation) from abstract concepts to data. In this paper we would like to take the opposite direction: from data (via construction) to consequence relations. The main emphasis is laid here not on general categories of consequence relations (e.g. Scott or Tarski ones) but on concrete operators that can be retrieved from information systems (e.g. different examples of Scott consequence). To this end, we employ Galois connections and adjunctions (en masse called Galois mappings) and study the consequence relations that can be built via these maps. The main novelty of our research comes from the investigation of consequence relations induced by adjunctions rather than monotone Galois connections, which have been the main subject of studies so far. Surprisingly, the operations obtained from adjunctions possess a number of counter-intuitive properties, which (in turn) request some intelligible interpretations. And this is our next objective: to make sense of these consequence relations in the context of information processing. [ABSTRACT FROM AUTHOR]
- Published: 2024
14. Shortest-length and coarsest-granularity constructs vs. reducts: An experimental evaluation.
- Author: Lazo-Cortés, Manuel S., Sanchez-Diaz, Guillermo, and Almanza Ortega, Nelva N.
- Subjects: ROUGH sets; FEATURE selection; DATA reduction
- Abstract:
In the domain of rough set theory, super-reducts represent subsets of attributes possessing the same discriminative power as the complete set of attributes when it comes to distinguishing objects across distinct classes in supervised classification problems. Within the realm of super-reducts, the concept of reducts holds significance, denoting subsets that are irreducible. Contrastingly, constructs, while serving the purpose of distinguishing objects across different classes, also exhibit the capability to preserve certain shared characteristics among objects within the same class. In essence, constructs represent a subtype of super-reducts that integrates information both inter-classes and intra-classes. Despite their potential, constructs have garnered comparatively less attention than reducts. Both reducts and constructs find application in the reduction of data dimensionality. This paper exposes key concepts related to constructs and reducts, providing insights into their roles. Additionally, it conducts an experimental comparative study between optimal reducts and constructs, considering specific criteria such as shortest length and coarsest granularity, and evaluates their performance using classical classifiers. The outcomes derived from employing seven classifiers on sixteen datasets lead us to propose that both coarsest granularity reducts and constructs prove to be effective choices for dimensionality reduction in supervised classification problems. Notably, when considering the optimality criterion of the shortest length, constructs exhibit clear superiority over reducts, which are found to be less favorable. Moreover, a comparative analysis was conducted between the results obtained using the coarsest granularity constructs and a technique from outside of rough set theory, specifically correlation-based feature selection. The former demonstrated statistically superior performance, providing further evidence of its efficacy in comparison. [ABSTRACT FROM AUTHOR]
- Published: 2024
15. Few-shot learning based on hierarchical feature fusion via relation networks.
- Author: Jia, Xiao, Mao, Yingchi, Pan, Zhenxiang, Wang, Zicheng, and Ping, Ping
- Subjects: DEEP learning; IMAGE recognition (Computer vision); MACHINE learning; GRANULAR computing
- Abstract:
Few-shot learning, which aims to identify new classes with few samples, is an increasingly popular and crucial research topic in the machine learning. Recently, the development of deep learning has deepened the network structure of a few-shot model, thereby obtaining deeper features from the samples. This trend led to an increasing number of few-shot learning models pursuing more complex structures and deeper features. However, discarding shallow features and blindly pursuing the depth of sample feature levels is not reasonable. The features at different levels of the sample have different information and characteristics. In this paper, we propose a few-shot image classification model based on deep and shallow feature fusion and a coarse-grained relationship score network (HFFCR). First, we utilize networks with different depth structures as feature extractors and then fuse the two kinds of sample features. The fused sample features collect sample information at different levels. Second, we condense the fused features into a coarse-grained prototype point. Prototype points can better represent the information in this class and improve classification efficiency. Finally, we construct a relationship score network, concatenating the prototype points and query samples into a feature map and sending it into the network to calculate the relationship score. The classification criteria for learnable relationship scores reflect the information difference between the two samples. Experiments on three datasets show that HFFCR has advanced performance. [ABSTRACT FROM AUTHOR]
- Published: 2024
16. Feature selection for multi-label learning based on variable-degree multi-granulation decision-theoretic rough sets.
- Author: Yu, Ying, Wan, Ming, Qian, Jin, Miao, Duoqian, Zhang, Zhiqiang, and Zhao, Pengfei
- Subjects: ROUGH sets; GRANULATION; FEATURE selection; RDF (Document markup language)
- Abstract:
Multi-label learning (MLL) suffers from the high-dimensional feature space teeming with irrelevant and redundant features. To tackle this, several multi-label feature selection (MLFS) algorithms have emerged as vital preprocessing steps. Nonetheless, existing MLFS methods have their shortcomings. Primarily, while they excel at harnessing label-feature relationships, they often struggle to leverage inter-feature information effectively. Secondly, numerous MLFS approaches overlook the uncertainty in the boundary domain, despite its critical role in identifying high-quality features. To address these issues, this paper introduces a novel MLFS algorithm, named VMFS. It innovatively integrates multi-granulation rough sets with three-way decision, leveraging multi-granularity decision-theoretic rough sets (MGDRS) with variable degrees for optimal performance. Initially, we construct coarse decision (RDC), fine decision (RDF), and uncertainty decision (RDU) functions for each object based on MGDRS with variable degrees. These decision functions then quantify the dependence of attribute subsets, considering both deterministic and uncertain aspects. Finally, we employ the dependency to assess attribute importance and rank them accordingly. Our proposed method has undergone rigorous evaluation on various standard multi-label datasets, demonstrating its superiority. Experimental results consistently show that VMFS significantly outperforms other algorithms on most datasets, underscoring its effectiveness and reliability in multi-label learning tasks. [ABSTRACT FROM AUTHOR]
- Published: 2024
17. A probabilistic modal logic for context-aware trust based on evidence.
- Author: Aldini, Alessandro, Curzi, Gianluca, Graziani, Pierluigi, and Tagliaferri, Mirko
- Subjects: MODAL logic; NEUROLINGUISTICS
- Abstract:
Trust is an extremely helpful construct when reasoning under uncertainty. Thus, being able to logically formalize the concept in a suitable language is important. However, doing so is problematic for three reasons. First, in order to keep track of the contextual nature of trust, situation trackers are required inside the language. Second, in order to produce trust estimations, agents rely on evidence personally gathered or reported by other agents; this requires elements in the language that can track which agents are used as referrals and how much weight is placed on their opinions. Finally, trust is subjective in nature, thus, personal thresholds are needed to track the trust-propensity of different evaluators. In this paper we propose an interpretation of a probabilistic modal language à la Hennessy-Milner in order to capture a context-aware quantitative notion of trust based on evidence. We also provide an axiomatization for the language and prove soundness, completeness, and decidability results. [ABSTRACT FROM AUTHOR]
- Published: 2024
18. An auto-weighted enhanced horizontal collaborative fuzzy clustering algorithm with knowledge adaption mechanism.
- Author: Yang, Huilin, Yu, Fusheng, Pedrycz, Witold, Yang, Zonglin, Chang, Jiaqi, and Wang, Jiayin
- Subjects: HYDROCHLOROFLUOROCARBONS; FUZZY sets
- Abstract:
Among the multi-source data clustering tasks, there is a kind of frequently encountered tasks where only one of the multi-source datasets is available for sake of privacy and other reasons. The only available dataset is called local dataset, and the other are called external datasets. The horizontal collaborative fuzzy clustering (HCFC) model is a typical one that can deal with such clustering tasks. In HCFC, each external dataset is used through the knowledge mined from it rather than itself. The knowledge expressed as a knowledge partition matrix is fused into the clustering process of the local dataset. Reviewing the existing HCFC models, we can find three issues that need improvement. Firstly, the existing HCFC models quantify the collaboration contribution of each external knowledge by a hyperparameter at dataset-level, and moreover, do not distinguish the collaboration contributions of objects in the same external dataset. This may lead to counterintuitive clustering results. Focused on this issue, this paper proposes an enhanced HCFC (EHCFC) algorithm that extends the collaboration from dataset-level to object-level, and assigns different weights to objects based on the information amount provided by objects. Through EHCFC, a more flexible collaboration and a more intuitive clustering result can be reached. Secondly, the collaboration mechanisms of the existing HCFC models require that the dimensionalities of the partition matrices of external datasets and local dataset are the same, which makes the HCFC algorithms unable to work in many real situations. Focused on this limitation, a knowledge adaption mechanism based on relative entropy and spectral clustering is proposed resulting in a further refined EHCFC-KA algorithm, i.e., EHCFC with knowledge adaption. The proposed knowledge adaption mechanism makes both the HCFC algorithms and the EHCFC algorithm effective and successful in more application scenarios. Finally, we define two indexes in terms of consistency (the consistency of the clustering result with external knowledge) to evaluate the performance of collaborative clustering. Experiments on synthetic datasets and UCI public datasets demonstrate that the proposed EHCFC and EHCFC-KA algorithms outperform the existing HCFC algorithms and achieve significantly better intuitive collaborative clustering performance. [ABSTRACT FROM AUTHOR]
- Published: 2024
19. Uncertainty quantification in logistic regression using random fuzzy sets and belief functions.
- Author: Denœux, Thierry
- Subjects: RANDOM sets; FUZZY sets; MONTE Carlo method; RANDOM variables; DISTRIBUTION (Probability theory); SET functions; ERROR rates; POLYNOMIAL chaos
- Abstract:
Evidential likelihood-based inference is a new approach to statistical inference in which the relative likelihood function is interpreted as a possibility distribution. By expressing new data as a function of the parameter and a random variable with known probability distribution, one then defines a random fuzzy set and an associated predictive belief function representing uncertain knowledge about future observations. In this paper, this approach is applied to binomial and multinomial regression. In the binomial case, the predictive belief function can be computed by numerically integrating the possibility distribution of the posterior probability. In the multinomial case, the solution is obtained by a combination of constrained nonlinear optimization and Monte Carlo simulation. In both cases, computations can be considerably simplified using a normal approximation to the relative likelihood. Numerical experiments show that decision rules based on predictive belief functions make it possible to reach lower error rates for different rejection rates, as compared to decisions based on posterior probabilities. [ABSTRACT FROM AUTHOR]
- Published: 2024
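The starting point of evidential likelihood-based inference, reading the relative likelihood as a possibility distribution, can be sketched for a single binomial observation as follows (the counts and the queried assertion are invented; the paper's predictive belief functions for regression are not reproduced here).

```python
import numpy as np
from scipy import stats

# Toy binomial data: k successes out of n trials (made-up numbers).
n, k = 20, 14

theta = np.linspace(1e-3, 1 - 1e-3, 999)
lik = stats.binom.pmf(k, n, theta)

# Relative likelihood, read as a possibility distribution over theta.
pl = lik / lik.max()

# Possibility of the assertion "theta <= 0.5" is the sup of pl over it;
# its necessity is 1 minus the possibility of the complementary assertion.
possibility = pl[theta <= 0.5].max()
necessity = 1.0 - pl[theta > 0.5].max()
print(f"possibility = {possibility:.3f}, necessity = {necessity:.3f}")
```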
20. L-valued covering-based rough sets and corresponding decision-making applications.
- Author: El-Saady, Kamal, Rashed, Amal, and Temraz, Ayat A.
- Subjects: ROUGH sets; RESIDUATED lattices; DECISION making; NEIGHBORHOODS
- Abstract:
Considering L to be a complete residuated lattice, by introducing the notion of L-valued covering on an L-set (as a universe), and then an L-valued neighborhood based on it, we present the concept of L-valued covering-based rough sets. We mainly address the following issues in this paper: firstly, we present four types of L-valued neighborhood operators and study some of their respective properties. Secondly, we construct L-valued lower (resp., upper) approximation operators and then discuss some of their properties. Finally, we propose a kind of multi-attribute decision-making (MADM) problem based on an L-valued covering-based rough set model (for L = [0,1]). [ABSTRACT FROM AUTHOR]
- Published: 2024
21. Multi-label feature selection based on fuzzy rough sets with metric learning and label enhancement.
- Author: Cai, Mingjie, Yan, Mei, Wang, Pei, and Xu, Feng
- Subjects: ROUGH sets; FUZZY sets; MACHINE learning; FEATURE selection
- Abstract:
Multi-label feature selection based on fuzzy rough sets, as a key step of multi-label data preprocessing, has been widely concerned by scholars in recent years. Most of the existing multi-label feature selection algorithms directly treat labels as logical labels and use a single distance metric to describe similarity. However, the variability of label descriptions and the limitations of a single distance measure should not be overlooked. In this paper, we propose a fuzzy rough set model with metric learning and label enhancement. Specifically, we use a kernel membership label enhancement algorithm based on JS divergence to convert logical labels into numerical labels, which not only reflects the importance of different labels, but also takes into account the differences in label distribution. In addition, a multi-metric learning algorithm is proposed for multi-label learning, in which the metric distance function under the label space and feature space can be learned autonomously. Then, based on the proposed model, we propose a novel multi-label feature selection algorithm based on metric learning and fuzzy rough sets. On this basis, a fast multi-label feature selection algorithm is further designed to improve the computational efficiency. In the experiments, compared with other nine algorithms on real-world datasets, the results show the superiority of the proposed algorithm. [ABSTRACT FROM AUTHOR]
- Published: 2024
22. Three-phase multi-criteria ranking considering three-way decision framework and criterion fuzzy concept.
- Author: Zhang, Kai and Dai, Jianhua
- Subjects: FUZZY sets
- Abstract:
The criterion fuzzy concept refers to a fuzzy set that represents the decision-maker's subjective preference for each criterion within the universe of criteria. Addressing the challenge of ranking all alternatives based on a given criterion fuzzy concept is a novel research direction in the field of fuzzy multi-criteria ranking issues. This paper proposes a three-phase approach for multi-criteria ranking in fuzzy environments, which combines the criterion fuzzy concept and three-way decision thinking. The proposed approach not only analyzes the decision-making characteristics of all alternatives but also facilitates their ranking. During the first phase, a qualitative classification method based on the criterion fuzzy concept and ideal solutions is defined, which divides all alternatives into three independent decision sub-regions. During the second phase, by analyzing the priority relationships among the alternatives within every sub-region, three local ranking rules for alternatives are proposed to determine the ranking of alternatives in each classification region. During the third phase, the semantic relations among three classification regions are considered to give an overall ranking of all alternatives. Finally, combined with two existing quantitative ranking indicators, multiple data sets are employed to verify the feasibility and superiority of the proposed three-phase multi-criteria ranking approach. [ABSTRACT FROM AUTHOR]
- Published: 2024
23. Feature selection of dominance-based neighborhood rough set approach for processing hybrid ordered data.
- Author: Chen, Jiayue and Zhu, Ping
- Subjects: FEATURE selection; ROUGH sets; NEIGHBORHOODS
- Abstract:
Feature selection is a fundamental application of rough set theory in identifying significant features and reducing data dimensionality. For ordered data (OD), existing studies of feature selection mainly aim at ODs with specific criteria, i.e., single-valued, interval-valued, or set-valued criteria. However, these studies are inapplicable to ODs simultaneously including the three criteria, namely, hybrid ODs (HODs). To fill such a gap, this paper investigates feature selection of HODs using dominance-based neighborhood rough sets (DNRSs). Firstly, we introduce a kind of DNRS model for HODs, examine its properties, and establish its relationships with other dominance-based rough sets. Corresponding to DNRSs of two different target concepts in HODs, we propose feature selections based on approximation accuracies, and the two feature selections are proven to be equivalent by the complementarity property of DNRSs. For the computation of the proposed feature selection, we construct discernibility criterion set, which is then employed to define the family of approximation discernibility criterion sets (ADCSF) and its minimal description (MD-ADCSF). All reducts and the most discriminative reduct are computed through MD-ADCSF, and the algorithms of MD-ADCSF and the most discriminative reduct are achieved in matrix form. Finally, we verify validity and effectiveness of the two algorithms by comparison experiments on nine real UCI datasets. [ABSTRACT FROM AUTHOR]
- Published: 2024
24. Evaluating uncertainty with Vertical Barrier Models.
- Author: Miranda, Enrique, Pelessoni, Renato, and Vicig, Paolo
- Subjects: NEIGHBORHOODS; PROBABILITY theory
- Abstract:
Vertical Barrier Models (VBM) are a family of imprecise probability models that generalise a number of well known distortion/neighbourhood models (such as the Pari-Mutuel Model, the Linear-Vacuous Model, and others) while still being relatively simple. Several of their properties were established in previous works; in this paper we explore, in a finite framework, further facets of these models: their interpretation as neighbourhood models, the structure of their credal set in terms of maximum number of its extreme points, the result of merging operations with VBMs, the properties of their mass function, the conditions for VBMs to be belief functions or maxitive measures and the approximation of other models by VBMs. [ABSTRACT FROM AUTHOR]
- Published: 2024
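For context on the distortion models that Vertical Barrier Models generalise, the sketch below evaluates lower and upper probabilities of an event under two classical special cases, the Pari-Mutuel Model and the Linear-Vacuous mixture; the reference probability, the distortion parameters, and the event are illustrative, and the VBM family itself is not implemented.

```python
import numpy as np

# Reference probability on a finite space of 4 outcomes (illustrative numbers).
p0 = np.array([0.1, 0.2, 0.3, 0.4])

def pari_mutuel(event, delta=0.2):
    """Lower/upper probability of an event (boolean mask) under the Pari-Mutuel Model."""
    p = p0[event].sum()
    return max((1 + delta) * p - delta, 0.0), min((1 + delta) * p, 1.0)

def linear_vacuous(event, eps=0.2):
    """Lower/upper probability under the Linear-Vacuous mixture (eps-contamination)."""
    p = p0[event].sum()
    lower = 1.0 if event.all() else (1 - eps) * p
    upper = (1 - eps) * p + eps if event.any() else 0.0
    return lower, upper

A = np.array([True, True, False, False])   # the event {w1, w2}
print("Pari-Mutuel:", pari_mutuel(A), "Linear-Vacuous:", linear_vacuous(A))
```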
25. Constructing overlap functions on bounded posets via multiplicative generators.
- Author: Lu, Jing and Zhao, Bin
- Subjects: PARTIALLY ordered sets; IMAGE processing; AGGREGATION operators
- Abstract:
Overlap functions are an important class of aggregation operators on [0,1] that have been proposed for applications in image processing, classification, etc. Later, Paiva et al. lifted overlap functions on [0,1] to complete lattices. In this paper, we continue to study overlap functions on bounded posets so as to lift the continuity in the notion of overlap functions from [0,1] to bounded posets mainly from the topological aspects. More precisely, we introduce the notion of overlap functions on bounded posets. And then, we consider the multiplicative generator triples of overlap functions on bounded posets, and investigate constructions of overlap functions by multiplicative generator triples. [ABSTRACT FROM AUTHOR]
- Published: 2024
26. Interval R-Sheffer strokes and interval fuzzy Sheffer strokes endowed with admissible orders.
- Author: Zhao, Yifan and Liu, Hua-Wen
- Subjects: GENERALIZATION; LOGIC
- Abstract:
The fuzzy Sheffer stroke is a new class of fuzzy connectives introduced by Baczyński et al., which generalizes the Sheffer stroke operation of classical logic. However, an investigation of interval extensions of fuzzy Sheffer strokes is still missing in the literature. To fill this gap, in this paper we introduce two interval generalizations of fuzzy Sheffer strokes. Firstly, we propose the notion of interval R-Sheffer strokes based on interval directional monotonicity, studying their properties, characterization, representability and constructions. We then introduce the concept of interval fuzzy Sheffer strokes endowed with admissible orders. In particular, we present and compare three construction methods for interval fuzzy Sheffer strokes with respect to ≤α,β orders. [ABSTRACT FROM AUTHOR]
- Published: 2024
27. Aggregation of random elements over bounded lattices.
- Author: Baz, Juan, Díaz, Irene, and Montes, Susana
- Subjects: RANDOM measures; PARTIALLY ordered sets; RANDOM variables; RANDOM matrices; RANDOM graphs; PROBABILITY theory
- Abstract:
Aggregation functions are widely used to fuse information from different sources in a unique value. In many cases, the aggregated information is related to some experimental measure or random sampling of a population. In this direction, it is reasonable to consider aggregation of random elements. In this paper, the concept of aggregation functions of random elements over bounded lattices, which are measurable functions from a probability space to a bounded lattice, is presented. In particular, starting from a partially ordered set, a measurable space is constructed. Random elements are considered to be measurable functions from a probability space to the measurable space. The concept of aggregation of random elements over bounded lattices is defined by generalizing the monotonicity and the boundary conditions in terms of stochastic orders. Several types, such as the induced, random and degenerated aggregations of random elements over bounded lattices are defined and some coherence properties are studied. Particular examples regarding the aggregation of random variables, random graphs and random semi-positive matrices are provided. [ABSTRACT FROM AUTHOR]
- Published: 2024
28. Some notes on possibilistic randomisation with t-norm based joint distributions in strategic-form games.
- Author: Corsi, Esther Anna, Hosni, Hykel, and Marchioni, Enrico
- Subjects: IDEMPOTENTS; NASH equilibrium; EXPECTED utility; STRATEGY games; TRIANGULAR norms
- Abstract:
This article continues the investigation started in [18] on the role of possibilistic mixed strategies in strategic-form games. In this earlier work we assumed, as standard in possibility theory, that joint possibility distributions were computed by combining possibilistic mixed strategies with the minimum t-norm. In this paper, we investigate the consequences of defining joint possibility distributions by using any continuous t-norm, with players' expected utilities based on the Choquet integral. We characterise under which conditions a pair of possibilistic mixed strategies is an equilibrium, generalising the results first presented in [18] , and also show that the set of equilibria in possibilistic mixed strategies depends on the set of idempotent elements of a t-norm and not just on the chosen t-norm. [ABSTRACT FROM AUTHOR]
- Published: 2024
29. Some thoughts about transfer learning. What role for the source domain?
- Author: Cornuéjols, A.
- Subjects: LEARNING problems; RAILROAD trains
- Abstract:
Transfer learning is called for when the training and test data do not share the same input distributions (P_X^S ≠ P_X^T) and/or the same conditional distributions (P_{Y|X}^S ≠ P_{Y|X}^T). In the most general case, the input spaces and/or output spaces can be different: X_S ≠ X_T and/or Y_S ≠ Y_T. However, most work assumes that X_S = X_T. Furthermore, a commonly held assumption is that the source hypothesis must be good on the source training data and that the "distance" between the source and the target domains must be as small as possible in order to get a good (transferred) target hypothesis. This paper revisits the reasons for these beliefs and discusses the relevance of these conditions. An algorithm is presented which can deal with transfer learning problems where X_S ≠ X_T, and that furthermore brings a fresh perspective on the role of the source hypothesis (it does not have to be good) and on what is important in the distance between the source and the target domains (translations between them should belong to a limited set). Experiments illustrate the properties of the method and confirm the theoretical analysis. Determining beforehand a relevant source hypothesis remains an open problem, but the vista provided here helps in understanding its role. [ABSTRACT FROM AUTHOR]
- Published: 2024
30. The interior of inconsistency in a knowledge base.
- Author: Mu, Kedian
- Subjects: KNOWLEDGE base; SEMANTICS
- Abstract:
Looking inside local inconsistencies arising in different parts of a knowledge base may help us better frame the inconsistency of the knowledge base. Moreover, as possible changes of the inconsistency due to removing some formulas from the knowledge base, local inconsistencies play an important role in identifying contributions of formulas to the inconsistency in the knowledge base. In this paper, we focus on local inconsistencies arising in all inconsistent subsets of a knowledge base and their relations with the inconsistency in the whole knowledge base. We call inconsistencies in all the inconsistent subsets of a knowledge base the interior of inconsistency of the knowledge base, and characterize the interior of inconsistency in two directions. One is the distribution of local inconsistencies over the power set of the knowledge base and relations between them. It focuses on local appearances of the inconsistency in different parts of the knowledge base. The other is the hierarchical structure of the interior of inconsistency due to different deviations of local inconsistencies from the inconsistency of the knowledge base. This direction is more interested in the aspect of each local inconsistency as a potential change of the inconsistency due to removing some formulas. Then we consider the interior of inconsistency of a knowledge base from syntactic and semantic perspectives, respectively. [ABSTRACT FROM AUTHOR]
- Published: 2024
31. RCAviz: Exploratory search in multi-relational datasets represented using relational concept analysis.
- Author: Huchard, Marianne, Martin, Pierre, Muller, Emile, Poncelet, Pascal, Raveneau, Vincent, and Sallaberry, Arnaud
- Subjects: CONCEPTUAL structures; NAVIGATION; VISUAL analytics
- Abstract:
The conceptual structures built with Formal Concept Analysis (FCA) and its extensions are appropriate constructs for supporting Exploratory Search (ES). FCA indeed classifies a set of objects described by Boolean attributes in a concept lattice which is prone to (intra-lattice) navigation. Relational Concept Analysis (RCA), for its part, classifies several sets of objects connected through multiple binary relationships by using logical operators (quantifiers) which can be approximate. The output is a set of interconnected concept lattices, thus adding inter-lattice navigation opportunities. In this paper, we describe the web platform RCAviz, which aims to support such intra- and inter-lattice navigation. The user can select a subset of objects and attributes as a starting point for navigation. Then RCAviz shows the associated concept and its close intra- and inter-lattice neighbors. The user can access to the objects and attributes introduced and inherited in a concept. They then can navigate, i.e. zoom and pan the current view, and move from one concept to another. Additional views show the previous and the next conceptual structures, as well as an history which allows the user to browse its navigation. A navigation example is shown on a real dataset to illustrate the potential of RCAviz for ES. [ABSTRACT FROM AUTHOR]
- Published: 2024
32. Stochastic dominance and statistical preference for random variables coupled by arbitrary copulas.
- Author: Couso, Inés and Sánchez, Luciano
- Subjects: RANDOM variables; STOCHASTIC dominance; COPULA functions; STOCHASTIC orders; CUMULATIVE distribution function; MARGINAL distributions
- Abstract:
Recently, results have been published showing that first order stochastic dominance implies statistical preference and diff-stochastic dominance, when the copula relating the compared variables is either Archimedean, the product copula, or one of the Fréchet-Hoeffding bounds. In the present paper, we rely on known results on multivariate stochastic orders to extend these results and simplify the proofs. The results are expanded in two directions: First, we show that it suffices for the copula to be symmetric. Second, we reveal that first stochastic dominance entails a wider range of stochastic preferences beyond statistical preference and diff-stochastic dominance. We further analyze whether first stochastic dominance implies statistical preference for the case of asymmetric copulas. We observe that, when at least one of the marginal cumulative distribution functions has no discontinuity jumps, the family of asymmetric copulas for which the implication holds is at least as large as the one for which it does not. [ABSTRACT FROM AUTHOR]
- Published: 2024
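To make the two relations compared in this entry concrete: first-order stochastic dominance is a pointwise comparison of cumulative distribution functions, while statistical preference compares P(X > Y) with P(Y > X) and therefore depends on the copula coupling X and Y. A small empirical check under the independence copula, with arbitrary toy marginals, might look as follows.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

# Toy example: X ~ N(1, 1) dominates Y ~ N(0, 1) at first order; drawing them
# independently corresponds to coupling by the product (independence) copula.
x = rng.normal(1.0, 1.0, n)
y = rng.normal(0.0, 1.0, n)

# Empirical check of first-order stochastic dominance: F_X(t) <= F_Y(t) for all t.
grid = np.linspace(-4, 5, 200)
Fx = np.searchsorted(np.sort(x), grid, side="right") / n
Fy = np.searchsorted(np.sort(y), grid, side="right") / n
fsd = np.all(Fx <= Fy + 1e-3)                  # small tolerance for sampling noise

# Statistical preference: P(X > Y) + 0.5 * P(X = Y) >= 0.5.
pref = np.mean(x > y) + 0.5 * np.mean(x == y)
print(f"FSD holds (empirically): {fsd}, P(X>Y) + 0.5*P(X=Y) = {pref:.3f}")
```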
33. Relevance, recovery and recuperation: A prelude to ring withdrawal.
- Author: Fermé, Eduardo, Garapa, Marco, Nayak, Abhaya, and Reis, Maurício D.L.
- Subjects: CONTRACTION operators; VON Neumann algebras; AXIOMS
- Abstract:
In this paper, we introduce recuperative withdrawals: belief change operators that satisfy recuperation (a postulate weaker than recovery), all the AGM postulates for contraction except recovery, and another postulate which is a slightly stronger condition than conjunctive inclusion. Furthermore, we present a constructive definition for a class of operators, named ring withdrawals, which are such that the outcome of a ring withdrawal of a belief set K by a sentence α is obtained by adding to the set of most plausible models ‖K‖ all the worlds which are as close to ‖K‖ as its closest ¬α-worlds. Ring withdrawals satisfy Lindström and Rabinowicz's interpolation thesis. We show that the classes of recuperative withdrawals and of ring withdrawals are identical. Additionally, we show that the class of ring withdrawals is not contained in and does not contain the class of AGM contractions or the class of severe withdrawals. Finally, we present methods for defining an operator of ring withdrawal by means of a severe withdrawal operator and by means of an AGM contraction operator, and vice versa. [ABSTRACT FROM AUTHOR]
- Published: 2024
34. Stochastically ordered aggregation operators.
- Author: Baz, Juan, Pellerey, Franco, Díaz, Irene, and Montes, Susana
- Subjects: STOCHASTIC orders; RANDOM variables; LINEAR orderings; AGGREGATION operators; PROBABILITY theory; ORDINATION
- Abstract:
In aggregation theory, there exists a large number of aggregation functions that are defined in terms of rearrangements in increasing order of the arguments. Prominent examples are the Ordered Weighted Operator and the Choquet and Sugeno integrals. Following a probability approach, ordering random variables by means of stochastic orders can be also a way to define aggregations of random variables. However, stochastic orders are not total orders, thus pairs of incomparable distributions can appear. This paper is focused on the definition of aggregations of random variables that take into account the stochastic ordination of the components of the input random vectors. Three alternatives are presented, the first one by using expected values and admissible permutations, then a modification for multivariate Gaussian random vectors and a third one that involves a transformation of the initial random vectors in new ones whose components are ordered with respect to the usual stochastic order. A deep theoretical study of the properties of all the proposals is made. A practical example regarding temperature prediction is provided [ABSTRACT FROM AUTHOR]
- Published: 2024
35. A three-way decision approach for dynamically expandable networks.
- Author: Wajid, Usman, Hamza, Muhammad, Khan, Muhammad Taimoor, and Azam, Nouman
- Subjects: DEEP learning; ROUGH sets
- Abstract:
Conventional deep learning models are designed to work on a single task. They must be trained from scratch each time new tasks are added, which leads to overhead in training time. Continual deep learning models with dynamically expandable network architecture aim to handle this issue. The key idea in these models is to find a balance between the properties of stability (preserving the learned information) and plasticity (updating and accommodating the new information), sometimes referred to as the stability-plasticity dilemma. The stability and plasticity of the model critically depend on the three-way division of nodes into freeze, partially regularize and duplicate nodes. Freezing more nodes results in high stability but typically low plasticity. On the other hand, duplicating more nodes results in high plasticity but may not provide effective stability. In this paper, we introduce an approach called three-way decisions based dynamically expandable networks, or 3WDDEN, and its memory-based version called 3WDDEN-replay. The proposed approaches use game-theoretic rough sets to determine effective thresholds for the three-way division of nodes by considering a tradeoff game between stability and plasticity. Experimental results of 3WDDEN on MNIST variant datasets show an overall improvement of 3.8% in accuracy compared to the standard dynamically expandable network (DEN) approach. 3WDDEN-replay further improves accuracy at additional memory cost. [ABSTRACT FROM AUTHOR]
- Published: 2024
36. Graph representation learning method based on three-way partial order structure.
- Author: Yan, Enliang, Hao, Shikuan, Zhang, Tao, Hao, Tianyong, Chen, Qiliang, and Yu, Jianping
- Subjects: REPRESENTATIONS of graphs; KNOWLEDGE representation (Information theory); GRANULAR computing; BIG data
- Abstract:
In the era of big data, handling massive datasets to extract valuable information has become increasingly critical. Knowledge representation emerges as a pivotal method to address this challenge. In the domain of knowledge representation, there exist two primary approaches: symbolic representation and vector representation. The integration of symbolic and vector representations to harness their respective strengths has become the cutting-edge approach to address challenges in the field of knowledge representation. This paper proposes a method that integrates a partial order formal structure analysis (POFSA) with graph representation learning. Specifically, we initially construct three-way partial order structure graphs, then create an attribute graph based on this structure, which can be processed by the graph representation learning methods. Finally, we utilize the graph representation learning to construct embeddings for three-way attribute partial order structure diagram (APOSD). We comprehensively assesse these embeddings across eight different datasets and present the results. The experiments indicate the feasibility of our proposed approach which is proven a novel approach of combining symbolic and vector representations for handling complex data and implicit knowledge. [ABSTRACT FROM AUTHOR]
- Published: 2024
37. Confidence assessment in safety argument structure - Quantitative vs. qualitative approaches.
- Author: Idmessaoud, Yassir, Dubois, Didier, and Guiochet, Jérémie
- Subjects: SAFETY standards; DEMPSTER-Shafer theory; ARGUMENT; MATHEMATICAL formulas; CONFIDENCE; SYSTEM safety
- Abstract:
Some safety standards (e.g., ISO 26262 in automotive industry) propose the use of argument structures to justify that the high-level safety properties of a system have been ensured. The goal structuring notation (GSN) is a graphical tool used to represent these argument structures. However, this approach does not address the uncertainties that may affect the validity of the arguments. Thus, some authors proposed to complement GSN patterns with a quantitative confidence assessment procedure. In this paper, we first present a refined procedure that expresses the relation between premises (pieces of evidence) and the conclusion (top-goal to be demonstrated) using logical expressions. Then using Dempster-Shafer theory, we quantify uncertainty on each expression to build an explicit mathematical formula for propagating uncertainty to the conclusion. Inputs for the propagation model are collected from experts and transformed into numerical values using an improved elicitation model. Afterwards, we introduce a purely qualitative alternative to the quantitative procedure based on the theory of qualitative capacities. Finally, we adapt the propagation and elicitation models to this framework. [ABSTRACT FROM AUTHOR]
- Published: 2024
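Since the quantitative procedure in this entry propagates uncertainty with Dempster-Shafer theory, a brief reminder of how two mass functions are merged may help; the frame of discernment and the expert masses below are invented for illustration and do not come from the GSN confidence-assessment procedure itself.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions with frozenset focal elements."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    # Normalise by the non-conflicting mass.
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Frame of discernment {acc, rej}; two hypothetical expert opinions about a safety
# goal, each leaving some mass on total ignorance (the whole frame).
ACC, REJ = frozenset({"acc"}), frozenset({"rej"})
ALL = ACC | REJ
expert1 = {ACC: 0.6, ALL: 0.4}
expert2 = {ACC: 0.5, REJ: 0.2, ALL: 0.3}
print(dempster_combine(expert1, expert2))
```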
38. A probabilistic analysis of selected notions of iterated conditioning under coherence.
- Author: Castronovo, Lydia and Sanfilippo, Giuseppe
- Subjects: LOGIC; PROBABILITY theory
- Abstract:
It is well known that basic conditionals satisfy some desirable basic logical and probabilistic properties, such as the compound probability theorem. However checking the validity of these becomes trickier when we switch to compound and iterated conditionals. Herein we consider de Finetti's notion of conditional both in terms of a three-valued object and as a conditional random quantity in the betting framework. We begin by recalling the notions of conjunction and disjunction among conditionals in selected trivalent logics. Then we analyze the notions of iterated conditioning in the frameworks of the specific three-valued logics introduced by Cooper-Calabrese, by de Finetti, and by Farrel. By computing some probability propagation rules we show that the compound probability theorem and other important properties are not always preserved by these formulations. Then, for each trivalent logic we introduce an iterated conditional as a suitable random quantity which satisfies the compound prevision theorem as well as some other desirable properties. We also check the validity of two generalized versions of Bayes' Rule for iterated conditionals. We study the p-validity of generalized versions of Modus Ponens and two-premise centering for iterated conditionals. Finally, we observe that all the basic properties are satisfied within the framework of iterated conditioning followed in recent papers by Gilio and Sanfilippo in the setting of conditional random quantities. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
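As a toy illustration of why trivalent logics can disagree, the sketch below encodes a conditional as a three-valued object (true / false / void) and contrasts de Finetti's conjunction with the quasi-conjunction in which the void value acts as a neutral element; it is a didactic simplification, not the conditional-random-quantity construction studied in the paper.

```python
# A toy three-valued encoding (True / False / Void) of conditionals,
# contrasting de Finetti's conjunction with quasi-conjunction.
from enum import Enum

class TV(Enum):
    FALSE = 0
    VOID = 1
    TRUE = 2

def definetti_and(a: TV, b: TV) -> TV:
    """de Finetti: Void sits strictly between False and True; conjunction is min."""
    return TV(min(a.value, b.value))

def quasi_and(a: TV, b: TV) -> TV:
    """Quasi-conjunction: Void acts as a neutral element."""
    if a is TV.VOID:
        return b
    if b is TV.VOID:
        return a
    return TV(min(a.value, b.value))

for x in TV:
    for y in TV:
        print(x.name, y.name, "->", definetti_and(x, y).name, quasi_and(x, y).name)
```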
39. Data complexity: An FCA-based approach.
- Author
-
Buzmakov, Alexey, Dudyrev, Egor, Kuznetsov, Sergei O., Makhalova, Tatiana, and Napoli, Amedeo
- Subjects
- *
MEASUREMENT - Abstract
In this paper we propose different indices for measuring the complexity of a dataset in terms of Formal Concept Analysis (FCA). We extend the line of research on the "closure structure" and the "closure index" based on minimum generators of intents (aka closed itemsets). We aim to capture statistical properties of a dataset, not just extremal characteristics such as the size of a passkey. To do so, we introduce an alternative approach in which we measure the complexity of a dataset w.r.t. five significant elements that can be computed in a concept lattice, namely intents (closed sets of attributes), pseudo-intents, proper premises, keys (minimal generators), and passkeys (minimum generators). Then we define several original indices allowing us to estimate the complexity of a dataset. Moreover, we study the distribution of these elements and indices in various real-world and synthetic datasets. Finally, we investigate the relations existing between these significant elements and indices, as well as their relations with implications and association rules. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
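A brute-force sketch of the simplest of the five element types, the intents (closed attribute sets) of a formal context, computed on a made-up toy context; the paper's indices would be built on counts and distributions of such elements, and the other element types (pseudo-intents, proper premises, keys, passkeys) require more machinery.

```python
# A brute-force sketch for a tiny, invented formal context: enumerate its
# intents (closed attribute sets) via the double-prime closure operator.
from itertools import combinations

# Object -> set of attributes (the incidence relation).
context = {
    "o1": {"a", "b"},
    "o2": {"b", "c"},
    "o3": {"a", "b", "c"},
}
attributes = set().union(*context.values())

def closure(attrs):
    """Attributes shared by all objects that have every attribute in `attrs`."""
    extent = [o for o, row in context.items() if attrs <= row]
    if not extent:
        return set(attributes)
    return set.intersection(*(context[o] for o in extent))

intents = set()
for r in range(len(attributes) + 1):
    for subset in combinations(sorted(attributes), r):
        intents.add(frozenset(closure(set(subset))))

print(sorted(sorted(i) for i in intents))
```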
40. Polyadic relational concept analysis.
- Author
-
Bazin, Alexandre, Galasso, Jessie, and Kahn, Giacomo
- Subjects
- *
LATTICE theory , *MATHEMATICAL analysis - Abstract
Formal concept analysis is a mathematical framework based on lattice theory that aims at representing the information contained in binary object-attribute datasets (called formal contexts) in the form of a lattice of so-called formal concepts. Since its introduction, it has been extended to more complex types of data. In this paper, we are interested in two of those extensions: relational concept analysis and polyadic concept analysis, which allow one to process relational data and n-ary relations, respectively. We present a framework for polyadic relational concept analysis that extends relational concept analysis to relational datasets made of n-ary relations. We show its basic properties and that it is a valid extension of relational concept analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Construction methods of fuzzy implications on bounded posets.
- Author
-
Wang, Mei, Zhang, Xiaohong, Bustince, Humberto, and Fernandez, Javier
- Subjects
- *
HOMOMORPHISMS , *PARTIALLY ordered sets - Abstract
The fuzzy implication on bounded lattices was introduced by Palmeira et al., together with a method for extending fuzzy implications on bounded lattices by using retractions. However, we find that the extension of fuzzy implications on bounded lattices can also be realized through homomorphisms. In this paper we continue the study of this topic, focusing on construction methods of fuzzy implications on bounded posets. More precisely, we give some construction methods of fuzzy implications via 0,1-homomorphisms on bounded posets. Then we further study two special kinds of fuzzy implications on bounded posets, (Q,N)-implications and R_Q-implications, where Q is a quasi-overlap function. Finally, we discuss the distributive laws and the importation laws of (Q,N)-implications and R_Q-implications over a quasi-overlap function Q. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
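To make the residuation-based construction concrete, the sketch below computes an R_Q-implication numerically from the product Q(x, z) = xz, a standard overlap function, on a grid over [0, 1]; the choice of Q and the unit-interval setting are illustrative assumptions, since the paper works on general bounded posets.

```python
# A numerical sketch of an R_Q-implication obtained by residuation from an
# overlap function Q, here Q(x, z) = x * z on [0, 1]. Purely illustrative.
import numpy as np

def r_implication(x: float, y: float, q=lambda a, b: a * b, steps: int = 10001) -> float:
    """I_Q(x, y) = sup{ z in [0, 1] : Q(x, z) <= y }, approximated on a grid."""
    zs = np.linspace(0.0, 1.0, steps)
    feasible = zs[q(x, zs) <= y]
    return float(feasible.max()) if feasible.size else 0.0

# For the product, residuation recovers the Goguen implication: 1 if x <= y, else y / x.
for x, y in [(0.2, 0.6), (0.8, 0.4), (1.0, 0.0)]:
    goguen = 1.0 if x <= y else y / x
    print(x, y, round(r_implication(x, y), 3), goguen)
```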
42. A direct approach to representing algebraic domains by formal contexts.
- Author
-
Zhou, Xiangnan, Wang, Longchun, and Li, Qingguo
- Subjects
- *
LANGUAGE & languages - Abstract
This paper establishes closer links between domain theory and Formal Concept Analysis (FCA). We propose the notion of an optimised concept for a formal context, which has some properties similar to those of an intent. With the tool of optimised concepts, we show that the class of formal contexts corresponds directly with algebraic domains. Meanwhile, two subclasses of formal contexts are identified to characterize algebraic L-domains and Scott domains. As an application, we resolve the open problem of how to reconstruct bounded complete continuous domains in the language of attribute continuous contexts. Finally, we extend our representation of algebraic domains to a categorical equivalence. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
43. A preferential interpretation of MultiLayer Perceptrons in a conditional logic with typicality.
- Author
-
Alviano, Mario, Bartoli, Francesco, Botta, Marco, Esposito, Roberto, Giordano, Laura, and Theseider Dupré, Daniele
- Subjects
- *
MULTILAYER perceptrons , *CONDITIONALS (Logic) , *DESCRIPTION logics , *KNOWLEDGE base , *KNOWLEDGE representation (Information theory) , *MANY-valued logic - Abstract
In this paper we investigate the relationships between a multipreferential semantics for defeasible reasoning in knowledge representation and a multilayer neural network model. Weighted knowledge bases for a simple description logic with typicality are considered under a (many-valued) "concept-wise" multipreference semantics. The semantics is used to provide a preferential interpretation of MultiLayer Perceptrons (MLPs). Both a model checking approach and an entailment-based approach are exploited in the verification of conditional properties of MLPs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
44. An efficient method of renewing object-induced three-way concept lattices involving decreasing attribute-granularity levels.
- Author
-
Xie, Junping, Yang, Jing, Li, Jinhai, and Wang, Debby D.
- Subjects
- *
ALGORITHMS - Abstract
In three-way concept analysis, changing (decreasing or increasing) attribute-granularity levels is needed to seek desirable information. Reconstructing three-way concept lattices from scratch often requires substantial computation and long running times when attribute-granularity levels are changed. To avoid this problem, a good strategy is to renew three-way concept lattices incrementally. Our paper studies how to renew object-induced three-way concept lattices when attribute-granularity levels are decreased. Firstly, we analyze the changes of object-induced three-way concept lattices when attribute-granularity levels are decreased. To characterize the changes of object-induced three-way concepts, we classify these concepts into six categories, derive necessary and sufficient conditions for identifying these categories, and investigate their properties. To explore the changes of covering relations among object-induced three-way concepts, we classify the covering relations into three categories, and identify them by finding the destructors of the deleted object-induced three-way concepts before the decrease and by analyzing the child concepts of the object-induced three-way concepts acting as destructors after the decrease. Secondly, using the above analysis, we put forward a novel algorithm called OEL-Collapse to renew object-induced three-way concept lattices when attribute-granularity levels are decreased. Finally, experiments are conducted to illustrate the efficiency of the OEL-Collapse algorithm. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
45. Some general fusion and transformation frames for merging basic uncertain information.
- Author
-
Jin, LeSheng, Yager, Ronald R., Mesiar, Radko, and Chen, Zhen-Song
- Subjects
- *
MATHEMATICAL analysis , *AGGREGATION operators , *CERTAINTY - Abstract
Basic Uncertain Information (BUI) is a recently introduced type of uncertain data that has rapidly undergone development and found practical application. The existing aggregation operators designed for BUI encompass only the weighted mean and the Choquet integral. The present study puts forth a set of general information fusion frameworks and methodologies aimed at merging BUI granules. The first mode yields BUI granules as its output, whereas the subsequent two modes generate outputs in the form of interval values. The paper includes numerical examples and applications that correspond to the presented findings. The study also analyses various mathematical properties of the three proposed BUI fusion modes, including idempotency, monotonicities, certainty-derived inclusion, certainty monotonicity, homogeneities, non-symmetricity, comonotone additivities, and continuities. The proposals and analyses presented in this work are of a general nature and have the potential to inspire various practical specifications. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
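As a point of reference, the sketch below treats a BUI granule as a (value, certainty) pair in [0,1]^2 and applies a plain weighted arithmetic mean to both components, i.e. the kind of existing operator the abstract mentions; the paper's new fusion frames are more general, and their exact formulas should be taken from the paper itself.

```python
# Illustrative only: a BUI granule viewed as a (value, certainty) pair and a
# simple weighted-mean fusion of both components. Not the paper's new frames.
from dataclasses import dataclass

@dataclass
class BUI:
    value: float      # the reported evaluation in [0, 1]
    certainty: float  # how certain the source is about it, in [0, 1]

def weighted_mean_fusion(granules, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    value = sum(w * g.value for w, g in zip(weights, granules))
    certainty = sum(w * g.certainty for w, g in zip(weights, granules))
    return BUI(value, certainty)

experts = [BUI(0.7, 0.9), BUI(0.5, 0.6), BUI(0.8, 0.4)]
print(weighted_mean_fusion(experts, [0.5, 0.3, 0.2]))
```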
46. A novel information fusion method using improved entropy measure in multi-source incomplete interval-valued datasets.
- Author
-
Xu, Weihua, Cai, Ke, and Wang, Debby D.
- Subjects
- *
DISTRIBUTION (Probability theory) , *ENTROPY , *ENTROPY (Information theory) , *DATA mining , *MAXIMUM entropy method , *INFORMATION resources , *INFORMATION design - Abstract
Multi-source data is a comprehensive data type that combines multiple sources of information or datasets. Compared to point-valued data, interval-valued data provides a more accurate representation of the uncertainty and variability associated with objects. In practical situations, data obtained from multiple sources may contain missing values for various reasons. Therefore, it is essential to develop multi-source information fusion technology in order to achieve information fusion or information extraction from multi-source incomplete data. This paper explores the information fusion problem for multi-source incomplete interval-valued datasets. The primary contribution of this study is to use the principle of statistical distribution and KL divergence to establish a metric for measuring the similarity between intervals. Firstly, this approach reduces the problem of disregarding the internal information within interval values, which can result in the loss of valuable information. Secondly, we establish an interval fuzzy similarity relation based on this notion of similarity among interval values. Moreover, we investigate the uncertainty measurement of incomplete interval-valued decision datasets and design a new information-entropy fusion method. Finally, we comprehensively evaluate the effectiveness of the proposed method. Experimental results indicate that the proposed approach has advantages over the maximum, minimum, mean, and information-entropy fusion methods based on tolerance relations. In addition, the distance metric used in this article improves the fusion classification effect compared to several common interval-valued distance measures. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
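The sketch below conveys the general idea of a distribution-based interval similarity: each interval is mapped to a probability distribution and a symmetrised KL divergence is turned into a similarity score. The Gaussian approximation and the exp(-d) mapping are assumptions made only for illustration and are not the construction defined in the paper.

```python
# A hedged sketch: intervals as assumed Gaussians (midpoint as mean, width-based
# spread), symmetrised KL divergence mapped to a similarity in (0, 1].
import math

def interval_to_gaussian(lo: float, hi: float):
    mu = (lo + hi) / 2.0
    sigma = max((hi - lo) / 4.0, 1e-6)  # avoid degenerate (point) intervals
    return mu, sigma

def kl_gauss(mu1, s1, mu2, s2):
    return math.log(s2 / s1) + (s1 ** 2 + (mu1 - mu2) ** 2) / (2 * s2 ** 2) - 0.5

def interval_similarity(i1, i2):
    g1, g2 = interval_to_gaussian(*i1), interval_to_gaussian(*i2)
    d = 0.5 * (kl_gauss(*g1, *g2) + kl_gauss(*g2, *g1))  # symmetrised KL
    return math.exp(-d)  # 1.0 means identical intervals

print(interval_similarity((0.0, 1.0), (0.0, 1.0)))   # 1.0
print(interval_similarity((0.0, 1.0), (2.0, 3.0)))   # much smaller
```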
47. Changing behaviour under unfairness: An evolutionary model of the Ultimatum Game.
- Author
-
Arioli, Gianni, Lucchetti, Roberto, and Valente, Giovanni
- Subjects
- *
EVOLUTIONARY models , *GAMES , *DEMOGRAPHIC change - Abstract
Experimental results on the Ultimatum Game indicate that receivers may reject non-zero offers, even though that seems irrational. The explanation is that, when players are treated unfairly, they can act against strict rationality. This paper discusses an evolutionary model of the Ultimatum Game describing how populations of players change their behaviour in time. We prove an analytical result that establishes under what conditions receivers tend to reject unfair offers. The response to unfair offers is also shown to be sensitive to different degrees of unfairness. We then introduce a Bayesian game to translate our result from populations to individual players. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
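A toy discrete-time replicator dynamics for the Ultimatum Game is sketched below, only to make the kind of evolutionary model the abstract refers to concrete; the strategy sets, payoffs, and update rule are generic textbook choices, not the paper's analytical model.

```python
# A toy replicator dynamics for the Ultimatum Game: proposers choose an offer,
# responders an acceptance threshold; strategy shares grow with expected payoff.
import numpy as np

offers = np.array([0.1, 0.3, 0.5])       # proposer strategies: fraction offered
thresholds = np.array([0.0, 0.2, 0.4])   # responder strategies: minimum acceptable offer
p = np.full(len(offers), 1 / 3)          # population share of each proposer strategy
q = np.full(len(thresholds), 1 / 3)      # population share of each responder strategy

for _ in range(200):
    accept = offers[:, None] >= thresholds[None, :]          # who accepts what
    proposer_payoff = ((1 - offers)[:, None] * accept) @ q   # expected share kept
    responder_payoff = (offers[:, None] * accept).T @ p      # expected share received
    p = p * proposer_payoff / (p @ proposer_payoff)          # replicator update
    q = q * responder_payoff / (q @ responder_payoff)

print("proposer mix:", np.round(p, 3))
print("responder mix:", np.round(q, 3))
```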
48. How to choose a completion method for pairwise comparison matrices with missing entries: An axiomatic result.
- Author
-
Csató, László
- Subjects
- *
DIRECTED acyclic graphs , *DIRECTED graphs - Abstract
Since there exist several completion methods to estimate the missing entries of pairwise comparison matrices, practitioners face a difficult task in choosing the best technique. Our paper contributes to this issue: we consider a special set of incomplete pairwise comparison matrices that can be represented by a weakly connected directed acyclic graph, and study whether the derived weights are consistent with the partial order implied by the underlying graph. According to previous results from the literature, two popular procedures, the incomplete eigenvector and the incomplete logarithmic least squares methods, fail to satisfy the required property. Here, the recently introduced lexicographically optimal completion combined with either of these weighting methods is shown to avoid ordinal violation in the above setting. Our finding provides a powerful argument for using the lexicographically optimal completion to determine the missing elements of an incomplete pairwise comparison matrix. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
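The sketch below implements the incomplete logarithmic least squares weighting mentioned in the abstract, which reduces to a small linear least squares problem over the known comparisons; the incomplete matrix used is invented, and the lexicographically optimal completion itself is not reproduced here.

```python
# Incomplete logarithmic least squares (LLSM) weighting for a pairwise
# comparison matrix with missing entries (None). Example matrix is invented.
import numpy as np

def incomplete_llsm(A):
    """Minimise sum over known (i, j) of (log A[i][j] - (y_i - y_j))^2."""
    n = len(A)
    rows, rhs = [], []
    for i in range(n):
        for j in range(i + 1, n):
            if A[i][j] is not None:
                row = np.zeros(n)
                row[i], row[j] = 1.0, -1.0
                rows.append(row)
                rhs.append(np.log(A[i][j]))
    rows.append(np.ones(n))  # fix the scale: log-weights sum to 0
    rhs.append(0.0)
    y, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    w = np.exp(y)
    return w / w.sum()

# 4 alternatives, two comparisons missing; known comparisons form a weakly
# connected graph, so the weights are uniquely determined.
A = [[1,    2,    None, 4],
     [None, 1,    3,    None],
     [None, None, 1,    2],
     [None, None, None, 1]]
print(np.round(incomplete_llsm(A), 3))
```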
49. GFDC: A granule fusion density-based clustering with evidential reasoning.
- Author
-
Cai, Mingjie, Wu, Zhishan, Li, Qingguo, Xu, Feng, and Zhou, Jie
- Subjects
- *
DEMPSTER-Shafer theory , *DRILL core analysis , *GRANULATION , *GRANULAR computing - Abstract
Density-based clustering algorithms are known for their ability to detect irregular clusters, but they have limitations when dealing with clusters of varying densities. In this paper, we propose a new clustering algorithm called granule fusion density-based clustering with evidential reasoning (GFDC). The approach introduces the concept of sparse degree, which measures both the local and the global density of a sample and reflects its stability. A core-granule is the neighborhood granule of a sample whose sparse degree is minimal within its neighborhood. The core-granules are generated based on the sparse degree and are insensitive to clusters with varying densities. The core samples, i.e. the samples contained in core-granules, are used to form initial clusters through fusion strategies. Additionally, an assignment method based on Dempster-Shafer theory is developed to assign border samples and identify outliers. The experimental results demonstrate the effectiveness of GFDC on extensive synthetic and real-world datasets. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
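One plausible reading of the sparse degree idea, offered only as an illustration: score each sample by its mean distance to its k nearest neighbours and call it a core sample when its score is minimal within its own neighbourhood. The exact definitions of sparse degree, core-granules, and the evidential assignment step are those given in the paper.

```python
# A hedged reading of the "sparse degree" idea as a k-NN density surrogate;
# not the paper's definitions or its evidential assignment step.
import numpy as np

def core_samples(X: np.ndarray, k: int = 5):
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    order = np.argsort(dists, axis=1)
    knn = order[:, 1:k + 1]                      # skip self at position 0
    sparse_degree = dists[np.arange(len(X))[:, None], knn].mean(axis=1)
    cores = [i for i in range(len(X))
             if sparse_degree[i] <= sparse_degree[knn[i]].min()]
    return np.array(cores), sparse_degree

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(3, 1.0, (50, 2))])
cores, sd = core_samples(X, k=5)
print("number of core samples:", len(cores))
```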
50. Change in quantitative bipolar argumentation: Sufficient, necessary, and counterfactual explanations.
- Author
-
Kampik, Timotheus, Čyras, Kristijonas, and Ruiz Alarcón, José
- Subjects
- *
EXPLANATION , *ARGUMENT , *SEMANTICS , *COUNTERFACTUALS (Logic) , *ARTIFICIAL intelligence - Abstract
This paper presents a formal approach to explaining change of inference in Quantitative Bipolar Argumentation Frameworks (QBAFs). When drawing conclusions from a QBAF and updating the QBAF to then again draw conclusions (and so on), our approach traces changes – which we call strength inconsistencies – in the partial order over argument strengths that a semantics establishes on some arguments of interest, called topic arguments. We trace the causes of strength inconsistencies to specific arguments, which then serve as explanations. We identify sufficient, necessary, and counterfactual explanations for strength inconsistencies and show that strength inconsistency explanations exist if and only if an update leads to strength inconsistency. We define a heuristic-based approach to facilitate the search for strength inconsistency explanations, for which we also provide an implementation. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
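A toy illustration of what a strength inconsistency is: compute the strengths of two topic arguments before and after an update of a small acyclic QBAF and check whether their relative order flips. The aggregation used below is a simple assumed semantics written for the example, not the gradual semantics or explanation machinery analysed in the paper.

```python
# Toy acyclic QBAF: attackers pull strength down, supporters push it up; an
# update changes one base score and we test whether the topic-argument order flips.
def strength(arg, base, attackers, supporters, cache=None):
    cache = {} if cache is None else cache
    if arg in cache:
        return cache[arg]
    s = base[arg]
    for a in attackers.get(arg, []):
        s -= s * strength(a, base, attackers, supporters, cache)
    for a in supporters.get(arg, []):
        s += (1 - s) * strength(a, base, attackers, supporters, cache)
    cache[arg] = min(max(s, 0.0), 1.0)
    return cache[arg]

base = {"t1": 0.7, "t2": 0.5, "a": 0.2, "b": 0.1}
attackers = {"t1": ["a"]}
supporters = {"t2": ["b"]}
topics = ["t1", "t2"]

before = {t: strength(t, base, attackers, supporters) for t in topics}
base_updated = dict(base, b=0.9)          # the update: b's base score increases
after = {t: strength(t, base_updated, attackers, supporters) for t in topics}

def order(s):
    return s["t1"] >= s["t2"]

print(before, after, "strength inconsistency:", order(before) != order(after))
```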