1,622 results for "structure learning"
Search Results
2. Using GPT-4 to guide causal machine learning
- Author
-
Constantinou, Anthony C., Kitson, Neville K., and Zanga, Alessio
- Published
- 2025
- Full Text
- View/download PDF
3. Supervised structure learning
- Author
-
Friston, Karl J., Da Costa, Lancelot, Tschantz, Alexander, Kiefer, Alex, Salvatori, Tommaso, Neacsu, Victorita, Koudahl, Magnus, Heins, Conor, Sajid, Noor, Markovic, Dimitrije, Parr, Thomas, Verbelen, Tim, and Buckley, Christopher L.
- Published
- 2024
- Full Text
- View/download PDF
4. The neuroscience of active learning and direct instruction
- Author
-
Dubinsky, Janet M. and Hamid, Arif A.
- Published
- 2024
- Full Text
- View/download PDF
5. Online Structure Learning with Dirichlet Processes Through Message Passing
- Author
-
van Erp, Bart, Nuijten, Wouter W. L., de Vries, Bert, Li, Gang, Series Editor, Filipe, Joaquim, Series Editor, Xu, Zhiwei, Series Editor, Buckley, Christopher L., editor, Cialfi, Daniela, editor, Lanillos, Pablo, editor, Pitliya, Riddhi J., editor, Sajid, Noor, editor, Shimazaki, Hideaki, editor, Verbelen, Tim, editor, and Wisse, Martijn, editor
- Published
- 2025
- Full Text
- View/download PDF
6. Exploring and Learning Structure: Active Inference Approach in Navigational Agents
- Author
-
de Tinguy, Daria, Verbelen, Tim, Dhoedt, Bart, Li, Gang, Series Editor, Filipe, Joaquim, Series Editor, Xu, Zhiwei, Series Editor, Buckley, Christopher L., editor, Cialfi, Daniela, editor, Lanillos, Pablo, editor, Pitliya, Riddhi J., editor, Sajid, Noor, editor, Shimazaki, Hideaki, editor, Verbelen, Tim, editor, and Wisse, Martijn, editor
- Published
- 2025
- Full Text
- View/download PDF
7. Efficient Nonlinear DAG Learning Under Projection Framework
- Author
-
Yin, Naiyu, Yu, Yue, Gao, Tian, Ji, Qiang, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Antonacopoulos, Apostolos, editor, Chaudhuri, Subhasis, editor, Chellappa, Rama, editor, Liu, Cheng-Lin, editor, Bhattacharya, Saumik, editor, and Pal, Umapada, editor
- Published
- 2025
- Full Text
- View/download PDF
8. High order expression dependencies finely resolve cryptic states and subtypes in single cell data.
- Author
-
Jansma, Abel, Yao, Yuelin, Wolfe, Jareth, Del Debbio, Luigi, Beentjes, Sjoerd V, Ponting, Chris P, and Khamseh, Ava
- Subjects
- *GENE expression, *LIFE sciences, *PROXIMITY spaces, *CELL cycle, *LIVER cancer - Abstract
Single cells are typically typed by clustering into discrete locations in reduced dimensional transcriptome space. Here we introduce Stator, a data-driven method that identifies cell (sub)types and states without relying on cells' local proximity in transcriptome space. Stator labels the same single cell multiply, not just by type and subtype, but also by state such as activation, maturity or cell cycle sub-phase, through deriving higher-order gene expression dependencies from a sparse gene-by-cell expression matrix. Stator's finer resolution is clear from analyses of mouse embryonic brain, and human healthy or diseased liver. Rather than only coarse-scale labels of cell type, Stator further resolves cell types into subtypes, and these subtypes into stages of maturity and/or cell cycle phases, and yet further into portions of these phases. Among cryptically homogeneous embryonic cells, for example, Stator finds 34 distinct radial glia states whose gene expression forecasts their future GABAergic or glutamatergic neuronal fate. Further, Stator's fine resolution of liver cancer states reveals expression programmes that predict patient survival. We provide Stator as a Nextflow pipeline and Shiny App. Synopsis: Stator assigns multiple states and (sub)types to a single cell using coordinated gene expression and/or non-expression in a data-driven manner. Its application of structure learning has increased power for quantifying higher-order expression dependencies. Stator does not require PCA or UMAP dimensionality reduction, and its states are not based on cell proximity in expression space. Cells can be assigned by Stator simultaneously to cell type, subtype and state (e.g., activation, maturity and cell cycle sub-phase). Stator states are inferred with uncertainty quantification and are reproducible in disjoint data sets. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
9. Causal analysis for multivariate integrated clinical and environmental exposures data.
- Author
-
Sinha, Meghamala, Haaland, Perry, Krishnamurthy, Ashok, Lan, Bo, Ramsey, Stephen A., Schmitt, Patrick L., Sharma, Priya, Xu, Hao, and Fecho, Karamarie
- Subjects
- *DISTRIBUTION (Probability theory), *ELECTRONIC health records, *DATA structures, *ASTHMATICS, *CAUSAL inference - Abstract
Electronic health records (EHRs) provide a rich source of observational patient data that can be explored to infer underlying causal relationships. These causal relationships can be applied to augment medical decision-making or suggest hypotheses for healthcare research. In this study, we explored a large-scale EHR dataset on patients with asthma or related conditions (N = 14,937). The dataset included integrated data on features representing demographic factors, clinical measures, and environmental exposures. The data were accessed via a service named the Integrated Clinical and Environmental Service (ICEES). We estimated underlying causal relationships from the data to identify significant predictors of asthma attacks. We also performed simulated interventions on the inferred causal network to detect the causal effects, in terms of shifts in probability distribution for asthma attacks. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
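The "simulated interventions ... in terms of shifts in probability distribution for asthma attacks" described in entry 9 can be illustrated with a do-operation on a toy causal network. The sketch below uses a hypothetical three-variable network (exposure, symptom severity, attack) with made-up probabilities; it is not the causal graph inferred from the ICEES data, only a minimal illustration of comparing observational and post-intervention outcome distributions.

```python
# Forward-sample a small hand-specified discrete Bayesian network, then clamp one
# variable (a do() operation) and compare the outcome distribution before and after.
# The structure and probabilities are hypothetical, chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

def simulate(do_exposure=None):
    # Exposure -> Severity -> Attack, with Exposure also acting directly on Attack.
    exposure = np.full(N, do_exposure) if do_exposure is not None \
        else rng.random(N) < 0.3
    severity = rng.random(N) < np.where(exposure, 0.6, 0.2)
    attack = rng.random(N) < (0.05 + 0.25 * severity + 0.10 * exposure)
    return attack.mean()

print("P(attack)                 ", round(simulate(), 3))
print("P(attack | do(exposure=1))", round(simulate(do_exposure=True), 3))
print("P(attack | do(exposure=0))", round(simulate(do_exposure=False), 3))
```

The shift between the three printed probabilities is the kind of causal-effect summary the entry refers to.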
10. Dynamic evolution of causal relationships among cryptocurrencies: an analysis via Bayesian networks
- Author
-
Amirzadeh, Rasoul, Thiruvady, Dhananjay, Nazari, Asef, and Ee, Mong Shan
- Subjects
BAYESIAN analysis, CRYPTOCURRENCIES, DECISION making in investments, PRICE fluctuations, BITCOIN - Abstract
Understanding the relationships between cryptocurrencies is important for making informed investment decisions in this financial market. Our study utilises Bayesian networks to examine the causal interrelationships among six major cryptocurrencies: Bitcoin, Binance Coin, Ethereum, Litecoin, Ripple, and Tether. Beyond understanding the connectedness, we also investigate whether these relationships evolve over time. This understanding is crucial for developing profitable investment strategies and forecasting methods. Therefore, we introduce an approach to investigate the dynamic nature of these relationships. Our observations reveal that Tether, a stablecoin, behaves distinctly compared to mining-based cryptocurrencies and stands isolated from the others. Furthermore, our findings indicate that Bitcoin and Ethereum significantly influence the price fluctuations of the other coins, except for Tether. This highlights their key roles in the cryptocurrency ecosystem. Additionally, we conduct diagnostic analyses on constructed Bayesian networks, emphasising that cryptocurrencies generally follow the same market direction as extra evidence for interconnectedness. Moreover, our approach reveals the dynamic and evolving nature of these relationships over time, offering insights into the ever-changing dynamics of the cryptocurrency market. [ABSTRACT FROM AUTHOR]
- Published
- 2025
- Full Text
- View/download PDF
11. Learning dynamic cognitive map with autonomous navigation.
- Author
-
de Tinguy, Daria, Verbelen, Tim, and Dhoedt, Bart
- Subjects
COGNITIVE maps (Psychology), NAUTICAL charts, ANIMAL navigation, COGNITIVE learning, LEARNING ability - Abstract
Inspired by animal navigation strategies, we introduce a novel computational model to navigate and map a space rooted in biologically inspired principles. Animals exhibit extraordinary navigation prowess, harnessing memory, imagination, and strategic decision-making to traverse complex and aliased environments adeptly. Our model aims to replicate these capabilities by incorporating a dynamically expanding cognitive map over predicted poses within an active inference framework, enhancing our agent's generative model plasticity to novelty and environmental changes. Through structure learning and active inference navigation, our model demonstrates efficient exploration and exploitation, dynamically expanding its model capacity in response to anticipated novel un-visited locations and updating the map given new evidence contradicting previous beliefs. Comparative analyses in mini-grid environments with the clone-structured cognitive graph model (CSCG), which shares similar objectives, highlight our model's ability to rapidly learn environmental structures within a single episode, with minimal navigation overlap. Our model achieves this without prior knowledge of observation and world dimensions, underscoring its robustness and efficacy in navigating intricate environments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
12. Zero-shot counting with a dual-stream neural network model.
- Author
-
Thompson, Jessica A.F., Sheahan, Hannah, Dumbalska, Tsvetomira, Sandbrink, Julian D., Piazza, Manuela, and Summerfield, Christopher
- Subjects
- *ARTIFICIAL neural networks, *PARIETAL lobe, *VISUAL pathways, *VISUAL learning, *VISUAL cortex - Abstract
To understand a visual scene, observers need to both recognize objects and encode relational structure. For example, a scene comprising three apples requires the observer to encode concepts of "apple" and "three." In the primate brain, these functions rely on dual (ventral and dorsal) processing streams. Object recognition in primates has been successfully modeled with deep neural networks, but how scene structure (including numerosity) is encoded remains poorly understood. Here, we built a deep learning model, based on the dual-stream architecture of the primate brain, which is able to count items "zero-shot"—even if the objects themselves are unfamiliar. Our dual-stream network forms spatial response fields and lognormal number codes that resemble those observed in the macaque posterior parietal cortex. The dual-stream network also makes successful predictions about human counting behavior. Our results provide evidence for an enactive theory of the role of the posterior parietal cortex in visual scene understanding. • We describe a dual-stream neural network model that displays zero-shot counting • With ablations, we show how our dual-stream architecture supports this ability • The model replicates several aspects of human counting behavior and development • The learned representations mimic properties of neural codes for number and space How does the brain represent the structure of a visual scene (the relations among items, e.g., the cardinality) independent of scene contents (the objects in the scene, e.g., item identity)? Thompson et al. propose a dual-stream neural network model based on the parallel pathways of the primate visual system. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
13. BN structure learning in the node ordering space based on score caching.
- Author
-
高晓光, 闫栩辰, 王紫东, 刘晓寒, and 冯奇
- Subjects
SEARCH algorithms, BAYESIAN analysis, MACHINE learning, NEIGHBORHOODS, ITERATIVE learning control
- Published
- 2024
- Full Text
- View/download PDF
14. Bayesian network structure learning by dynamic programming algorithm based on node block sequence constraints.
- Author
-
He, Chuchao, Di, Ruohai, Li, Bo, and Neretin, Evgeny
- Subjects
BAYESIAN analysis, MACHINE learning, DYNAMIC programming, ORDER picking systems, ALGORITHMS - Abstract
The use of dynamic programming (DP) algorithms to learn Bayesian network structures is limited by their high space complexity and difficulty in learning the structure of large‐scale networks. Therefore, this study proposes a DP algorithm based on node block sequence constraints. The proposed algorithm constrains the traversal process of the parent graph by using the M‐sequence matrix to considerably reduce the time consumption and space complexity by pruning the traversal process of the order graph using the node block sequence. Experimental results show that compared with existing DP algorithms, the proposed algorithm can obtain learning results more efficiently with less than 1% loss of accuracy, and can be used for learning larger‐scale networks. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
15. Bayesian network structure learning by dynamic programming algorithm based on node block sequence constraints
- Author
-
Chuchao He, Ruohai Di, Bo Li, and Evgeny Neretin
- Subjects
Bayesian network (BN), dynamic programming (DP), node block sequence, strongly connected component (SCC), structure learning, Computational linguistics. Natural language processing, P98-98.5, Computer software, QA76.75-76.765 - Abstract
Abstract The use of dynamic programming (DP) algorithms to learn Bayesian network structures is limited by their high space complexity and difficulty in learning the structure of large‐scale networks. Therefore, this study proposes a DP algorithm based on node block sequence constraints. The proposed algorithm constrains the traversal process of the parent graph by using the M‐sequence matrix to considerably reduce the time consumption and space complexity by pruning the traversal process of the order graph using the node block sequence. Experimental results show that compared with existing DP algorithms, the proposed algorithm can obtain learning results more efficiently with less than 1% loss of accuracy, and can be used for learning larger‐scale networks.
- Published
- 2024
- Full Text
- View/download PDF
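Entries 14 and 15 describe a dynamic-programming structure learner that prunes the order-graph traversal with node block sequence constraints. As background only, the sketch below shows the unconstrained DP-over-subsets idea that such methods build on (choose the best sink for each variable subset under a decomposable BIC score); it is a generic illustration with toy data, not the authors' M-sequence or node-block pruning, and is feasible only for a handful of variables.

```python
# Exact score-based structure learning by dynamic programming over variable subsets.
from itertools import combinations
import numpy as np
import pandas as pd

def local_bic(data, child, parents):
    """Decomposable BIC score of `child` given `parents` for discrete data."""
    n = len(data)
    r = data[child].nunique()                       # cardinality of the child
    if parents:
        counts = data.groupby(list(parents) + [child]).size()
        parent_counts = data.groupby(list(parents)).size()
        q = len(parent_counts)                      # observed parent configurations
        ll = 0.0
        for idx, nijk in counts.items():
            key = idx[:-1] if len(parents) > 1 else idx[0]
            ll += nijk * np.log(nijk / parent_counts[key])
    else:
        q = 1
        vc = data[child].value_counts()
        ll = float((vc * np.log(vc / n)).sum())
    return ll - 0.5 * np.log(n) * q * (r - 1)

def exact_dag(data, max_parents=2):
    """Highest-scoring DAG by DP over subsets (tractable only for few nodes)."""
    nodes = list(data.columns)
    # Best (score, parent set) for each node, with parents drawn from each subset.
    best = {v: {} for v in nodes}
    for v in nodes:
        others = [u for u in nodes if u != v]
        for k in range(len(others) + 1):
            for subset in combinations(others, k):
                candidates = [(local_bic(data, v, ps), ps)
                              for m in range(min(len(subset), max_parents) + 1)
                              for ps in combinations(subset, m)]
                best[v][frozenset(subset)] = max(candidates)
    # DP over the order graph: best score of a DAG on subset S via its last node (sink).
    score, sink = {frozenset(): 0.0}, {}
    for k in range(1, len(nodes) + 1):
        for S in map(frozenset, combinations(nodes, k)):
            options = [(score[S - {s}] + best[s][S - {s}][0], s) for s in S]
            score[S], sink[S] = max(options)
    # Backtrack the sink sequence to recover each node's parent set.
    S, parents = frozenset(nodes), {}
    while S:
        s = sink[S]
        S = S - {s}
        parents[s] = best[s][S][1]
    return parents

# Toy usage: three binary variables generated as a chain A -> B -> C.  Because BIC is
# score-equivalent, any member of the chain's Markov equivalence class may be returned.
rng = np.random.default_rng(0)
a = rng.integers(0, 2, 2000)
b = (a ^ (rng.random(2000) < 0.1)).astype(int)
c = (b ^ (rng.random(2000) < 0.1)).astype(int)
print(exact_dag(pd.DataFrame({"A": a, "B": b, "C": c})))
```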
16. Processing Fluency and Predictive Processing: How the Predictive Mind Becomes Aware of its Cognitive Limitations.
- Author
-
Servajean, Philippe and Wiese, Wanja
- Subjects
- *ANIMAL cognition, *SELECTIVITY (Psychology), *BRAIN anatomy, *HEURISTIC, *HYPOTHESIS - Abstract
Predictive processing is an influential theoretical framework for understanding human and animal cognition. In the context of predictive processing, learning is often reduced to optimizing the parameters of a generative model with a predefined structure. This is known as Bayesian parameter learning. However, to provide a comprehensive account of learning, one must also explain how the brain learns the structure of its generative model. This second kind of learning is known as structure learning. Structure learning would involve true structural changes in generative models. The purpose of the current paper is to describe the processes involved upstream of these structural changes. To do this, we first highlight the remarkable compatibility between predictive processing and the processing fluency theory. More precisely, we argue that predictive processing is able to account for all the main theoretical constructs associated with the notion of processing fluency (i.e., the fluency heuristic, naïve theory, the discrepancy‐attribution hypothesis, absolute fluency, expected fluency, and relative fluency). We then use this predictive processing account of processing fluency to show how the brain could infer whether it needs a structural change for learning the causal regularities at play in the environment. Finally, we speculate on how this inference might indirectly trigger structural changes when necessary. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
17. Using Bayesian Networks to Investigate Psychological Constructs: The Case of Empathy.
- Author
-
Briganti, Giovanni, Decety, Jean, Scutari, Marco, McNally, Richard J., and Linkowski, Paul
- Subjects
- *INTERPERSONAL Reactivity Index, *BAYESIAN analysis, *RESEARCH questions, *PATHOLOGICAL psychology, *CAUSAL inference, *EMPATHY - Abstract
Network analysis is an emerging field for the study of psychopathology that considers constructs as arising from the interactions among their constituents. Pairwise effects among psychological components are often investigated by using this framework. Few studies have applied Bayesian networks, models that include directed interactions to perform causal inference on psychological constructs. Directed graphical models may be less straightforward to interpret in case the construct at hand does not contain symptoms but instead psychometric items from self-report measures. However, they may be useful in validating specific research questions that arise while using standard pairwise network models. In this study, we use Bayesian networks to investigate a well-known psychological construct, empathy from the Interpersonal Reactivity Index, in two large samples of 1973 university students from Belgium. Overall, our results support the hypotheses emphasizing empathic concern (i.e., sympathy) as causally important in the construct of empathy, and overall attribute the primacy of emotional components of empathy over their intellectual counterparts. Bayesian networks help researchers identify the plausible causal relationships in psychometric data, to gain new insight on the psychological construct under examination, help generate new hypotheses and provide evidence relevant to old ones. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
18. Bayesian network structure learning with a new ensemble weights and edge constraints setting mechanism.
- Author
-
Liu, Kaiyue, Zhou, Yun, and Huang, Hongbin
- Subjects
MACHINE learning, DIRECTED acyclic graphs, BAYESIAN analysis, LEARNING, INTERDISCIPLINARY education - Abstract
Bayesian networks (BNs) are highly effective in handling uncertain problems, which can assist in decision-making by reasoning with limited and incomplete information. Learning a faithful directed acyclic graph (DAG) from a large number of complex samples of a joint distribution is currently a challenging combinatorial problem. Due to the growing volume and complexity of data, some Bayesian structure learning algorithms are ineffective and lack the necessary precision to meet the required needs. In this paper, we propose a new PCCL-CC algorithm. To ensure the accuracy of the network structure, we introduce the new ensemble weights and edge constraints setting mechanism. In this mechanism, we employ a method that estimates the interaction between network nodes from multiple perspectives and divides the learning process into multiple stages. We utilize an asymmetric weighted ensemble method and adaptively adjust the network structure. Additionally, we propose a causal discovery method that effectively utilizes the causal relationships among data samples to correct the network structure and mitigate the influence of Markov equivalence classes (MEC). Experimental results on real datasets demonstrate that our approach outperforms state-of-the-art methods. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
19. Bayesian network structure learning based on an improved firefly algorithm.
- Author
-
宋楠, 邸若海, 王鹏, 李晓艳, 贺楚超, and 王储
- Abstract
The Bayesian network is currently one of the most effective theoretical models for representing and reasoning with uncertain knowledge. Before a Bayesian network can be used for analysis and inference, its model must first be obtained through structure and parameter learning, and structure learning is the basis for parameter learning. To address the facts that the existing firefly algorithm does not conform to biological rules and that Bayesian network structure learning has low efficiency and easily falls into local optima, MGM-FA (a firefly algorithm based on mutual information and a gender mechanism) was designed. Firstly, the Bayesian network skeleton graph is obtained by calculating the mutual information between nodes, and MGM-FA generates its initial population from this skeleton graph. Secondly, a personalized Bayesian network population-updating strategy based on the gender mechanism is introduced to preserve the diversity of the Bayesian network individuals. Lastly, a local optimizer and a perturbation operator are introduced to enhance the algorithm's ability to seek the optimum. Simulation experiments on standard networks of different sizes show that the accuracy and efficiency of the algorithm are improved compared with existing algorithms of the same type. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
20. A Bayesian Approach for Learning Bayesian Network Structures.
- Author
-
Zareifard, Hamid, Rezaeitabar, Vahid, Javidian, Mohammad Ali, and Yozgatligil, Ceylan
- Abstract
We introduce a Bayesian approach based on the Gibbs sampler for learning the Bayesian network structure. For this, the existence and the direction of the edges are specified by a set of parameters, to which we assign a non-informative discrete uniform prior. In the Gibbs sampling, we sample from the full conditional distributions of these parameters, which yields a set of DAGs. To obtain a single graph that best fits the data, a Monte Carlo Bayesian estimate of the probability of an edge between each pair of nodes is calculated. Results on benchmark Bayesian networks show that our method has higher accuracy than state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
21. Learning dynamic cognitive map with autonomous navigation
- Author
-
Daria de Tinguy, Tim Verbelen, and Bart Dhoedt
- Subjects
autonomous navigation, active inference, cognitive map, structure learning, dynamic mapping, knowledge learning, Neurosciences. Biological psychiatry. Neuropsychiatry, RC321-571 - Abstract
Inspired by animal navigation strategies, we introduce a novel computational model to navigate and map a space rooted in biologically inspired principles. Animals exhibit extraordinary navigation prowess, harnessing memory, imagination, and strategic decision-making to traverse complex and aliased environments adeptly. Our model aims to replicate these capabilities by incorporating a dynamically expanding cognitive map over predicted poses within an active inference framework, enhancing our agent's generative model plasticity to novelty and environmental changes. Through structure learning and active inference navigation, our model demonstrates efficient exploration and exploitation, dynamically expanding its model capacity in response to anticipated novel un-visited locations and updating the map given new evidence contradicting previous beliefs. Comparative analyses in mini-grid environments with the clone-structured cognitive graph model (CSCG), which shares similar objectives, highlight our model's ability to rapidly learn environmental structures within a single episode, with minimal navigation overlap. Our model achieves this without prior knowledge of observation and world dimensions, underscoring its robustness and efficacy in navigating intricate environments.
- Published
- 2024
- Full Text
- View/download PDF
22. Causal discovery from nonstationary time series
- Author
-
Sadeghi, Agathe, Gopal, Achintya, and Fesanghary, Mohammad
- Published
- 2025
- Full Text
- View/download PDF
23. A coevolutionary artificial bee colony for training feedforword neural networks
- Author
-
Zhang, Li, Li, Hong, and Gao, Weifeng
- Published
- 2024
- Full Text
- View/download PDF
24. Bayesian network structure learning with a new ensemble weights and edge constraints setting mechanism
- Author
-
Kaiyue Liu, Yun Zhou, and Hongbin Huang
- Subjects
Curriculum learning, Integrated weight, Structure learning, Causal correction, Robustness, Electronic computers. Computer science, QA75.5-76.95, Information technology, T58.5-58.64 - Abstract
Abstract Bayesian networks (BNs) are highly effective in handling uncertain problems, which can assist in decision-making by reasoning with limited and incomplete information. Learning a faithful directed acyclic graph (DAG) from a large number of complex samples of a joint distribution is currently a challenging combinatorial problem. Due to the growing volume and complexity of data, some Bayesian structure learning algorithms are ineffective and lack the necessary precision to meet the required needs. In this paper, we propose a new PCCL-CC algorithm. To ensure the accuracy of the network structure, we introduce the new ensemble weights and edge constraints setting mechanism. In this mechanism, we employ a method that estimates the interaction between network nodes from multiple perspectives and divides the learning process into multiple stages. We utilize an asymmetric weighted ensemble method and adaptively adjust the network structure. Additionally, we propose a causal discovery method that effectively utilizes the causal relationships among data samples to correct the network structure and mitigate the influence of Markov equivalence classes (MEC). Experimental results on real datasets demonstrate that our approach outperforms state-of-the-art methods.
- Published
- 2024
- Full Text
- View/download PDF
25. Overharvesting in human patch foraging reflects rational structure learning and adaptive planning.
- Author
-
Harhen, Nora and Bornstein, Aaron
- Subjects
decision-making, foraging, reinforcement learning, structure learning, Choice Behavior, Models, Theoretical, Learning, Decision Making, Environment, Humans
Patch foraging presents a sequential decision-making problem widely studied across organisms-stay with a current option or leave it in search of a better alternative? Behavioral ecology has identified an optimal strategy for these decisions, but, across species, foragers systematically deviate from it, staying too long with an option or overharvesting relative to this optimum. Despite the ubiquity of this behavior, the mechanism underlying it remains unclear and an object of extensive investigation. Here, we address this gap by approaching foraging as both a decision-making and learning problem. Specifically, we propose a model in which foragers 1) rationally infer the structure of their environment and 2) use their uncertainty over the inferred structure representation to adaptively discount future rewards. We find that overharvesting can emerge from this rational statistical inference and uncertainty adaptation process. In a patch-leaving task, we show that human participants adapt their foraging to the richness and dynamics of the environment in ways consistent with our model. These findings suggest that definitions of optimal foraging could be extended by considering how foragers reduce and adapt to uncertainty over representations of their environment.
- Published
- 2023
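Entry 25 explains overharvesting relative to the behavioural-ecology optimum. The short sketch below computes that optimum (the marginal value theorem leaving time) for a single exponentially depleting patch; all parameter values are made up for illustration, and the paper's structure-learning and uncertainty-adaptive discounting model is not reproduced.

```python
# Optimal patch-leaving time under the marginal value theorem: leave when the patch's
# instantaneous reward rate drops to the long-run average rate of the environment.
import numpy as np

travel_time = 6.0          # time to reach the next patch (illustrative value)
r0, decay = 10.0, 0.3      # initial reward rate and exponential depletion rate

def gain(t):               # cumulative reward after staying t time units in a patch
    return (r0 / decay) * (1 - np.exp(-decay * t))

stay = np.linspace(0.01, 20, 2000)
overall_rate = gain(stay) / (stay + travel_time)   # long-run rate for each stay time
t_opt = stay[np.argmax(overall_rate)]

# MVT check: at the optimum, the instantaneous rate r0*exp(-decay*t) matches the
# achieved long-run rate. Staying past t_opt is the 'overharvesting' the paper studies.
print(f"optimal leaving time ~ {t_opt:.2f}")
print(f"instantaneous rate at t_opt: {r0 * np.exp(-decay * t_opt):.2f}")
print(f"long-run rate at t_opt:      {overall_rate.max():.2f}")
```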
26. A Constrained Local Neighborhood Approach for Efficient Markov Blanket Discovery in Undirected Independent Graphs.
- Author
-
Liu, Kun, Li, Peiran, Zhang, Yu, Ren, Jia, Li, Ming, Wang, Xianyu, and Li, Cong
- Subjects
UNDIRECTED graphs, BAYESIAN analysis, CONSTRAINT algorithms, COMPUTATIONAL complexity, NEIGHBORHOODS - Abstract
When learning the structure of a Bayesian network, the search space expands significantly as the network size and the number of nodes increase, leading to a noticeable decrease in algorithm efficiency. Traditional constraint-based methods typically rely on the results of conditional independence tests. However, excessive reliance on these test results can lead to a series of problems, including increased computational complexity and inaccurate results, especially when dealing with large-scale networks where performance bottlenecks are particularly evident. To overcome these challenges, we propose a Markov blanket discovery algorithm based on constrained local neighborhoods for constructing undirected independence graphs. This method uses the Markov blanket discovery algorithm to refine the constraints in the initial search space, sets an appropriate constraint radius, thereby reducing the initial computational cost of the algorithm and effectively narrowing the initial solution range. Specifically, the method first determines the local neighborhood space to limit the search range, thereby reducing the number of possible graph structures that need to be considered. This process not only improves the accuracy of the search space constraints but also significantly reduces the number of conditional independence tests. By performing conditional independence tests within the local neighborhood of each node, the method avoids comprehensive tests across the entire network, greatly reducing computational complexity. At the same time, the setting of the constraint radius further improves computational efficiency while ensuring accuracy. Compared to other algorithms, this method can quickly and efficiently construct undirected independence graphs while maintaining high accuracy. Experimental simulation results show that, this method has significant advantages in obtaining the structure of undirected independence graphs, not only maintaining an accuracy of over 96% but also reducing the number of conditional independence tests by at least 50%. This significant performance improvement is due to the effective constraint on the search space and the fine control of computational costs. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
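Entry 26 centres on Markov blanket discovery with fewer conditional independence tests. The sketch below is a generic grow-shrink-style Markov blanket search using a Fisher-z partial-correlation CI test on Gaussian data; it illustrates the local-neighbourhood idea only and is not the constrained-radius algorithm the paper proposes. Variable names and the toy chain are assumptions for the example.

```python
# Grow-shrink Markov blanket discovery with a Fisher-z partial-correlation CI test.
import numpy as np
from scipy import stats

def ci_test(data, i, j, cond, alpha=0.01):
    """True if x_i is independent of x_j given x_cond (Fisher-z test, Gaussian data)."""
    idx = [i, j] + list(cond)
    corr = np.corrcoef(data[:, idx], rowvar=False)
    prec = np.linalg.inv(corr)
    r = -prec[0, 1] / np.sqrt(prec[0, 0] * prec[1, 1])   # partial correlation
    n = data.shape[0]
    z = 0.5 * np.log((1 + r) / (1 - r)) * np.sqrt(n - len(cond) - 3)
    p = 2 * (1 - stats.norm.cdf(abs(z)))
    return p > alpha

def grow_shrink_mb(data, target):
    d = data.shape[1]
    mb = []
    # Grow (single pass for brevity; the classic algorithm iterates until no change):
    for x in range(d):
        if x != target and not ci_test(data, target, x, mb):
            mb.append(x)
    # Shrink: drop variables that are independent given the rest of the blanket.
    for x in list(mb):
        rest = [y for y in mb if y != x]
        if ci_test(data, target, x, rest):
            mb.remove(x)
    return mb

# Toy linear-Gaussian chain X0 -> X1 -> X2 -> X3: the Markov blanket of X2 is {X1, X3}.
rng = np.random.default_rng(0)
n = 5000
x0 = rng.normal(size=n)
x1 = x0 + 0.5 * rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)
x3 = x2 + 0.5 * rng.normal(size=n)
print(grow_shrink_mb(np.column_stack([x0, x1, x2, x3]), target=2))
```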
27. VertiBayes: learning Bayesian network parameters from vertically partitioned data with missing values.
- Author
-
van Daalen, Florian, Ippel, Lianne, Dekker, Andre, and Bermejo, Inigo
- Subjects
BAYESIAN analysis, MISSING data (Statistics), FEDERATED learning, EXPECTATION-maximization algorithms - Abstract
Federated learning makes it possible to train a machine learning model on decentralized data. Bayesian networks are widely used probabilistic graphical models. While some research has been published on the federated learning of Bayesian networks, publications on Bayesian networks in a vertically partitioned data setting are limited, with important omissions, such as handling missing data. We propose a novel method called VertiBayes to train Bayesian networks (structure and parameters) on vertically partitioned data, which can handle missing values as well as an arbitrary number of parties. For structure learning we adapted the K2 algorithm with a privacy-preserving scalar product protocol. For parameter learning, we use a two-step approach: first, we learn an intermediate model using maximum likelihood, treating missing values as a special value, then we train a model on synthetic data generated by the intermediate model using the EM algorithm. The privacy guarantees of VertiBayes are equivalent to those provided by the privacy preserving scalar product protocol used. We experimentally show VertiBayes produces models comparable to those learnt using traditional algorithms. Finally, we propose two alternative approaches to estimate the performance of the model using vertically partitioned data and we show in experiments that these give accurate estimates. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
28. Investigating consumers' intention of using contactless logistics technology in COVID-19 pandemic: a Copula-Bayesian Network approach.
- Author
-
Chen, Tianyi, Wong, Yiik Diew, Wang, Xueqin, and Li, Duowei
- Subjects
PAYMENT systems, CONTACTLESS payment systems, CONSUMER behavior, COVID-19 pandemic, THIRD-party logistics, BUSINESS planning, GENETIC algorithms - Abstract
Understanding consumers' intention of using contactless logistics technology is necessary for logistics service providers to make value-added business strategies in the COVID-19 pandemic. Previous studies have suggested that multiple factors shall be considered when investigating the intention, but few have attempted to comprehensively explain the causality of the intention. To bridge this gap, this study develops a Genetic Algorithm (GA)-based structure learning method and constructs a Copula-Bayesian Network (Copula-BN) upon questionnaire survey data to explore the causation of the intention. Based on the derived Copula-BN, we identify the key factors that contribute to the intention and reveal the causal relations along the main branch composed of those factors. Several findings provide theoretical and practical insights into the consumer-technology interaction under the pandemic context. Besides, this study demonstrates that the structure of the Copula-BN is rational and reasonable, which provides a solid basis for investigating the causality of the intention. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
29. The impact of variable ordering on Bayesian network structure learning.
- Author
-
Kitson, Neville K. and Constantinou, Anthony C.
- Subjects
MACHINE learning, BAYESIAN analysis, SAMPLE size (Statistics), ALGORITHMS - Abstract
Causal Bayesian Networks (CBNs) provide an important tool for reasoning under uncertainty with potential application to many complex causal systems. Structure learning algorithms that can tell us something about the causal structure of these systems are becoming increasingly important. In the literature, the validity of these algorithms is often tested for sensitivity over varying sample sizes, hyper-parameters, and occasionally objective functions, but the effect of the order in which the variables are read from data is rarely quantified. We show that many commonly-used algorithms, both established and state-of-the-art, are more sensitive to variable ordering than these other factors when learning CBNs from discrete variables. This effect is strongest in hill-climbing and its variants where we explain how it arises, but extends to hybrid, and to a lesser-extent, constraint-based algorithms. Because the variable ordering is arbitrary, any significant effect it has on learnt graph accuracy is concerning, and raises questions about the validity of both many older and more recent results produced by these algorithms in practical applications and their rankings in performance evaluations. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
30. Causal Structure Learning with Conditional and Unique Information Groups-Decomposition Inequalities.
- Author
-
Chicharro, Daniel and Nguyen, Julia K.
- Subjects
- *DISTRIBUTION (Probability theory), *ELECTRONIC data processing - Abstract
The causal structure of a system imposes constraints on the joint probability distribution of variables that can be generated by the system. Archetypal constraints consist of conditional independencies between variables. However, particularly in the presence of hidden variables, many causal structures are compatible with the same set of independencies inferred from the marginal distributions of observed variables. Additional constraints allow further testing for the compatibility of data with specific causal structures. An existing family of causally informative inequalities compares the information about a set of target variables contained in a collection of variables, with a sum of the information contained in different groups defined as subsets of that collection. While procedures to identify the form of these groups-decomposition inequalities have been previously derived, we substantially enlarge the applicability of the framework. We derive groups-decomposition inequalities subject to weaker independence conditions, with weaker requirements in the configuration of the groups, and additionally allowing for conditioning sets. Furthermore, we show how constraints with higher inferential power may be derived with collections that include hidden variables, and then converted into testable constraints using data processing inequalities. For this purpose, we apply the standard data processing inequality of conditional mutual information and derive an analogous property for a measure of conditional unique information recently introduced to separate redundant, synergistic, and unique contributions to the information that a set of variables has about a target. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
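Entry 30 builds testable constraints from conditional mutual information and data processing inequalities. The snippet below only shows plug-in estimation of I(X;Y) and I(X;Y|Z) from discrete samples and a numerical check of the data processing inequality on a toy Markov chain; it does not implement the paper's groups-decomposition inequalities.

```python
# Plug-in (conditional) mutual information estimates, in nats, on toy discrete data,
# plus a check of the data processing inequality I(X;Z) <= I(X;Y) for X -> Y -> Z.
import numpy as np
from collections import Counter

def mutual_info(x, y):
    """Plug-in estimate of I(X;Y) from two discrete sample arrays."""
    n = len(x)
    pxy, px, py = Counter(zip(x, y)), Counter(x), Counter(y)
    return sum(c / n * np.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def cond_mutual_info(x, y, z):
    """I(X;Y|Z) = sum_z p(z) * I(X;Y | Z=z), plug-in estimate."""
    n = len(z)
    total = 0.0
    for zv, cz in Counter(z).items():
        mask = z == zv
        total += cz / n * mutual_info(x[mask], y[mask])
    return total

rng = np.random.default_rng(0)
n = 200_000
x = rng.integers(0, 2, n)
y = np.where(rng.random(n) < 0.8, x, 1 - x)        # noisy copy of x
z = np.where(rng.random(n) < 0.8, y, 1 - y)        # noisy copy of y  (X -> Y -> Z)

print(f"I(X;Y)   = {mutual_info(x, y):.3f} nats")
print(f"I(X;Z)   = {mutual_info(x, z):.3f} nats   (<= I(X;Y) by the DPI)")
print(f"I(X;Y|Z) = {cond_mutual_info(x, y, z):.3f} nats")
```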
31. Algorithm 1045: A Covariate-Dependent Approach to Gaussian Graphical Modeling in R.
- Author
-
Helwig, Jacob, Dasgupta, Sutanoy, Zhao, Peng, Mallick, Bani K., and Pati, Debdeep
- Subjects
- *ALGORITHMS, *CONTINUOUS functions, *INTEGRATED software, *C++, *DATA modeling - Abstract
Graphical models are used to capture complex multivariate relationships and have applications in diverse disciplines such as biology, physics, and economics. Within this field, Gaussian graphical models aim to identify the pairs of variables whose dependence is maintained even after conditioning on the remaining variables in the data, known as the conditional dependence structure of the data. There are many existing software packages for Gaussian graphical modeling, however, they often make restrictive assumptions that reduce their flexibility for modeling data that are not identically distributed. Conversely, covdepGE is an R implementation of a variational weighted pseudo-likelihood algorithm for modeling the conditional dependence structure as a continuous function of an extraneous covariate. To build on the efficiency of this algorithm, covdepGE leverages parallelism and C++ integration with R. Additionally, covdepGE provides fully-automated and data-driven hyperparameter specification while maintaining flexibility for the user to decide key components of the estimation procedure. Through an extensive simulation study spanning diverse settings, covdepGE is demonstrated to be top of its class in recovering the ground truth conditional dependence structure while efficiently managing computational overhead. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
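covdepGE (entry 31) is an R package for covariate-dependent Gaussian graphical modelling. As a rough, non-covariate-dependent Python analogue of the underlying object (conditional dependence encoded in the sparsity pattern of the precision matrix), the sketch below uses scikit-learn's graphical lasso on simulated data; it is a stand-in for illustration, not covdepGE's variational weighted pseudo-likelihood algorithm.

```python
# Recover a conditional dependence graph as the support of the estimated precision matrix.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)
n = 2000
# Toy chain X0 - X1 - X2: X0 and X2 are conditionally independent given X1.
x0 = rng.normal(size=n)
x1 = 0.8 * x0 + rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)
X = np.column_stack([x0, x1, x2])

model = GraphicalLassoCV().fit(X)
prec = model.precision_
# Nonzero off-diagonal precision entries correspond to edges of the conditional
# dependence graph; the (0, 2) entry should be near zero for the chain above.
print(np.round(prec, 2))
edges = [(i, j) for i in range(3) for j in range(i + 1, 3) if abs(prec[i, j]) > 0.05]
print("estimated conditional-dependence edges:", edges)
```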
32. Graph Machine Learning for Fast Product Development from Formulation Trials
- Author
-
Dileo, Manuel, Olmeda, Raffaele, Pindaro, Margherita, Zignani, Matteo, Goos, Gerhard, Series Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bifet, Albert, editor, Krilavičius, Tomas, editor, Miliou, Ioanna, editor, and Nowaczyk, Slawomir, editor
- Published
- 2024
- Full Text
- View/download PDF
33. VertiBayes: learning Bayesian network parameters from vertically partitioned data with missing values
- Author
-
Florian van Daalen, Lianne Ippel, Andre Dekker, and Inigo Bermejo
- Subjects
Federated Learning ,Bayesian network ,Privacy preserving ,Vertically partitioned data ,Parameter learning ,Structure learning ,Electronic computers. Computer science ,QA75.5-76.95 ,Information technology ,T58.5-58.64 - Abstract
Abstract Federated learning makes it possible to train a machine learning model on decentralized data. Bayesian networks are widely used probabilistic graphical models. While some research has been published on the federated learning of Bayesian networks, publications on Bayesian networks in a vertically partitioned data setting are limited, with important omissions, such as handling missing data. We propose a novel method called VertiBayes to train Bayesian networks (structure and parameters) on vertically partitioned data, which can handle missing values as well as an arbitrary number of parties. For structure learning we adapted the K2 algorithm with a privacy-preserving scalar product protocol. For parameter learning, we use a two-step approach: first, we learn an intermediate model using maximum likelihood, treating missing values as a special value, then we train a model on synthetic data generated by the intermediate model using the EM algorithm. The privacy guarantees of VertiBayes are equivalent to those provided by the privacy preserving scalar product protocol used. We experimentally show VertiBayes produces models comparable to those learnt using traditional algorithms. Finally, we propose two alternative approaches to estimate the performance of the model using vertically partitioned data and we show in experiments that these give accurate estimates.
- Published
- 2024
- Full Text
- View/download PDF
34. Application of an improved Bayesian network to transformer fault diagnosis.
- Author
-
仝兆景, 兰孟月, and 荆利菲
- Abstract
To address the low accuracy of transformer fault diagnosis, a transformer fault diagnosis method based on the ISMA (Improved Slime Mold optimization Algorithm) and an optimized BN (Bayesian Network) is proposed. A hill-climbing algorithm searches the oriented maximum support tree to obtain the initial structure of the Bayesian network, that is, the initial population. A reverse learning strategy and the SCA (Sine Cosine Algorithm) are introduced into the improved slime mold optimization algorithm to increase population diversity, update population locations, and keep the population from falling into local optima. The characteristics of the transformer fault state are selected by the improved code-free ratio method, and the structure of the Bayesian network is optimized by the improved slime mold optimization algorithm to improve the accuracy of Bayesian-network-based transformer fault diagnosis. Different kinds of test functions are used to verify that the improved slime mold optimization algorithm has fast convergence speed and high convergence accuracy. The simulation results show that the accuracy of the ISMA-BN diagnostic model reaches 98.2% on the training set and 97.14% on the test set, which has certain research value. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
35. Hidden Variable Discovery Based on Regression and Entropy.
- Author
-
Liao, Xingyu and Liu, Xiaoping
- Subjects
- *MISSING data (Statistics), *REGRESSION analysis, *ENTROPY - Abstract
Inferring causality from observed data is crucial in many scientific fields, but this process is often hindered by incomplete data. The incomplete data can lead to mistakes in understanding how variables affect each other, especially when some influencing factors are not directly observed. To tackle this problem, we've developed a new algorithm called Regression Loss-increased with Causal Intensity (RLCI). This approach uses regression and entropy analysis to uncover hidden variables. Through tests on various real-world datasets, RLCI has been proven to be effective. It can help spot hidden factors that may affect the relationship between variables and determine the direction of causal relationships. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
36. The Improved Ordering-Based Search Method Incorporating with Ensemble Learning.
- Author
-
Wang, Hao, Wang, Zidong, Zhong, Ruiguo, Liu, Xiaohan, and Gao, Xiaoguang
- Abstract
The Bayesian network provides a useful way to deal with uncertain information, which helps researchers to better understand the human cognitive process. The foundation of the Bayesian network focuses on identifying the qualitative relations between variables, which is also called structure learning. Local search in the ordering space is an effective method for learning the structure of large-scale Bayesian networks. However, the existing algorithms tend to the local optimum and stop searching for superior solutions. To tackle the problem, random perturbations are applied to the local optimum without specific strategies, resulting in many meaningless restarts that sacrifice much time but still fail to improve the results. As an extension of the local search, simulated annealing stochastically searches the solution spaces and selects relatively poor solutions with a certain probability. This paper proposes a method based on simulated annealing to learn Bayesian network structure in the ordering space, which expands the search scope by probabilistically accepting poorer solutions. Moreover, we improve simulated annealing by adding a memory module and modifying the termination condition. The memory module records the optimal solution before accepting a worse solution, which avoids losing the possible global optimal solution. The new termination condition is related to the quality of the search results, which reduces many redundant searches. Besides, we design a new restart strategy based on ensemble learning. When the search traps in the local optimum, a new ordering is obtained to restart the search by perturbing the current ordering with constraints. The constraints are generated by the results of ensemble learning on multiple structures, which help the algorithm approach the global optimum solution. Experimental results show that our proposed methods improve the accuracy and efficiency in learning the optimal structure over the benchmarks compared to the state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
37. Bayesian network structure learning based on a whole-process parallel genetic algorithm.
- Author
-
蔡一鸣, 马力, 陆恒杨, and 方伟
- Published
- 2024
- Full Text
- View/download PDF
38. Optimization of Active Learning Strategies for Causal Network Structure.
- Author
-
Zhang, Mengxin and Zhang, Xiaojun
- Subjects
- *ACTIVE learning, *DIRECTED graphs, *LEARNING strategies, *DIRECTED acyclic graphs, *COMPLETE graphs, *CAUSAL inference - Abstract
Causal structure learning is one of the major fields in causal inference. Only the Markov equivalence class (MEC) can be learned from observational data; to fully orient unoriented edges, experimental data need to be introduced from external intervention experiments to improve the identifiability of causal graphs. Finding suitable intervention targets is key to intervention experiments. We propose a causal structure active learning strategy based on graph structures. In the context of randomized experiments, the central nodes of the directed acyclic graph (DAG) are considered as the alternative intervention targets. In each stage of the experiment, we decompose the chain graph by removing the existing directed edges; then, each connected component is oriented separately through intervention experiments. Finally, all connected components are merged to obtain a complete causal graph. We compare our algorithm with previous work in terms of the number of intervention variables, convergence rate and model accuracy. The experimental results show that the performance of the proposed method in restoring the causal structure is comparable to that of previous works. The strategy of finding the optimal intervention target is simplified, which improves the speed of the algorithm while maintaining the accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
39. SLED: Structure Learning based Denoising for Recommendation.
- Author
-
Zhang, Shengyu, Jiang, Tan, Kuang, Kun, Feng, Fuli, Yu, Jin, Ma, Jianxin, Zhao, Zhou, Zhu, Jianke, Yang, Hongxia, Chua, Tat-Seng, and Wu, Fei
- Abstract
The article introduces SLED, a Structure Learning based Denoising framework for recommendation systems. It addresses the challenge of denoising implicit feedback in recommender systems without relying on additional behavior signals. It is reported that SLED consists of two phases, center-aware graph structure learning and denoised recommendation.
- Published
- 2024
- Full Text
- View/download PDF
40. Bayesian learning of network structures from interventional experimental data.
- Author
-
Castelletti, F and Peluso, S
- Subjects
- *DIRECTED graphs, *DIRECTED acyclic graphs, *BAYESIAN analysis, *MEASUREMENT errors, *MARKOV chain Monte Carlo, *SYNTHETIC proteins, *CAUSAL models - Abstract
Directed acyclic graphs provide an effective framework for learning causal relationships among variables given multivariate observations. Under pure observational data, directed acyclic graphs encoding the same conditional independencies cannot be distinguished and are collected into Markov equivalence classes. In many contexts, however, observational measurements are supplemented by interventional data that improve directed acyclic graph identifiability and enhance causal effect estimation. We propose a Bayesian framework for multivariate data partially generated after stochastic interventions. To this end, we introduce an effective prior elicitation procedure leading to a closed-form expression for the directed acyclic graph marginal likelihood and guaranteeing score equivalence among directed acyclic graphs that are Markov equivalent post intervention. Under the Gaussian setting, we show, in terms of posterior ratio consistency, that the true network will be asymptotically recovered, regardless of the specific distribution of the intervened variables and of the relative asymptotic dominance between observational and interventional measurements. We validate our theoretical results via simulation and we implement a Markov chain Monte Carlo sampler for posterior inference on the space of directed acyclic graphs on both synthetic and biological protein expression data. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
41. Bayesian network structure learning based on HC-PSO algorithm.
- Author
-
Gao, Wenlong, Zhi, Minqian, Ke, Yongsong, Wang, Xiaolong, Zhuo, Yun, Liu, Anping, and Yang, Yi
- Subjects
- *BAYESIAN analysis, *PARTICLE swarm optimization, *MACHINE learning, *HAMMING distance, *ALGORITHMS, *HEURISTIC algorithms, *FUZZY graphs, *GENETIC algorithms - Abstract
Structure learning is the core of learning the Bayesian network graphical model, and current mainstream single-search algorithms have problems such as poor learning performance, a fuzzy initial network, and a tendency to fall into local optima. In this paper, we propose a heuristic learning algorithm, HC-PSO, combining the HC (Hill Climbing) algorithm and the PSO (Particle Swarm Optimization) algorithm: it first uses the HC algorithm to search for locally optimal network structures, takes these networks as the initial networks, then introduces mutation and crossover operators and uses the PSO algorithm for a global search. Meanwhile, we use a DE (Differential Evolution) strategy to select the mutation and crossover operators. Finally, experiments on four different datasets calculate the BIC (Bayesian Information Criterion) and HD (Hamming Distance) and compare against other algorithms; the results show that the HC-PSO algorithm is superior in feasibility and accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
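Entry 41 evaluates learned structures with BIC and Hamming distance (HD). The sketch below gives one common definition of the structural Hamming distance between two DAG adjacency matrices; conventions differ slightly across papers, so treat this as an illustrative choice rather than the exact metric used by the authors.

```python
# Structural Hamming distance (SHD) between two DAGs given as adjacency matrices,
# where a 1 at [i, j] means an edge i -> j. Counts missing, extra, and reversed edges.
import numpy as np

def shd(true_adj, learned_adj):
    t, l = np.asarray(true_adj, bool), np.asarray(learned_adj, bool)
    reversed_edges = np.sum(~t & l & t.T)          # learned j->i where truth has i->j
    skel_t, skel_l = t | t.T, l | l.T
    missing = np.sum(skel_t & ~skel_l) // 2        # undirected edges absent in learned
    extra = np.sum(~skel_t & skel_l) // 2          # undirected edges wrongly added
    return int(missing + extra + reversed_edges)

# Truth: A->B, B->C.  Learned: A->B, C->B (reversed), A->C (extra).
truth   = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])
learned = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]])
print(shd(truth, learned))   # 1 reversed + 1 extra = 2
```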
42. Hypothesis Tests for Structured Rank Correlation Matrices.
- Author
-
Perreault, Samuel, Nešlehová, Johanna G., and Duchesne, Thierry
- Subjects
- *SEA level, *MATRICES (Mathematics), *HYPOTHESIS, *MODEL validation - Abstract
Joint modeling of a large number of variables often requires dimension reduction strategies that lead to structural assumptions of the underlying correlation matrix, such as equal pair-wise correlations within subsets of variables. The underlying correlation matrix is thus of interest for both model specification and model validation. In this article, we develop tests of the hypothesis that the entries of the Kendall rank correlation matrix are linear combinations of a smaller number of parameters. The asymptotic behavior of the proposed test statistics is investigated both when the dimension is fixed and when it grows with the sample size. We pay special attention to the restricted hypothesis of partial exchangeability, which contains full exchangeability as a special case. We show that under partial exchangeability, the test statistics and their large-sample distributions simplify, which leads to computational advantages and better performance of the tests. We propose various scalable numerical strategies for implementation of the proposed procedures, investigate their behavior through simulations and power calculations under local alternatives, and demonstrate their use on a real dataset of mean sea levels at various geographical locations. Supplementary materials for this article are available online. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
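Entry 42 tests structured hypotheses about the Kendall rank correlation matrix, such as partial exchangeability. The snippet below only computes the empirical Kendall tau matrix on simulated block-exchangeable data and informally inspects whether within-block correlations share one value; the paper's test statistics and asymptotics are not reproduced, and the block layout is an assumption of the example.

```python
# Empirical Kendall rank correlation matrix and an informal look at block structure.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1500
# Two exchangeable blocks of 3 variables each, generated from shared latent factors.
f1, f2 = rng.normal(size=(2, n))
block1 = f1[:, None] + rng.normal(size=(n, 3))
block2 = 0.5 * f2[:, None] + rng.normal(size=(n, 3))
df = pd.DataFrame(np.hstack([block1, block2]),
                  columns=["a1", "a2", "a3", "b1", "b2", "b3"])

tau = df.corr(method="kendall")
print(tau.round(2))

# Under partial exchangeability the three within-block-1 taus should coincide
# (likewise within block 2); here we just look at their spread.
w1 = [tau.loc[i, j] for i, j in [("a1", "a2"), ("a1", "a3"), ("a2", "a3")]]
w2 = [tau.loc[i, j] for i, j in [("b1", "b2"), ("b1", "b3"), ("b2", "b3")]]
print("block-1 taus:", np.round(w1, 3), " spread:", round(max(w1) - min(w1), 3))
print("block-2 taus:", np.round(w2, 3), " spread:", round(max(w2) - min(w2), 3))
```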
43. Causal Analysis of Physiological Sleep Data Using Granger Causality and Score-Based Structure Learning.
- Author
-
Thomas, Alex, Niranjan, Mahesan, and Legg, Julian
- Subjects
- *SLEEP positions, *WAIST circumference, *OXYGEN in the blood, *SLEEP, *BAYESIAN analysis - Abstract
Understanding how the human body works during sleep and how this varies in the population is a task with significant implications for medicine. Polysomnographic studies, or sleep studies, are a common diagnostic method that produces a significant quantity of time-series sensor data. This study seeks to learn the causal structure from data from polysomnographic studies carried out on 600 adult volunteers in the United States. Two methods are used to learn the causal structure of these data: the well-established Granger causality and "DYNOTEARS", a modern approach that uses continuous optimisation to learn dynamic Bayesian networks (DBNs). The results from the two methods are then compared. Both methods produce graphs that have a number of similarities, including the mutual causation between electrooculogram (EOG) and electroencephalogram (EEG) signals and between sleeping position and SpO2 (blood oxygen level). However, DYNOTEARS, unlike Granger causality, frequently finds a causal link to sleeping position from the other variables. Following the creation of these causal graphs, the relationship between the discovered causal structure and the characteristics of the participants is explored. It is found that there is an association between the waist size of a participant and whether a causal link is found between the electrocardiogram (ECG) measurement and the EOG and EEG measurements. It is concluded that a person's body shape appears to impact the relationship between their heart and brain during sleep and that Granger causality and DYNOTEARS can produce differing results on real-world data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
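Entry 43 compares Granger causality with DYNOTEARS on polysomnography signals. The sketch below shows the Granger side only, on synthetic signals where x drives y at a lag of two samples, using statsmodels' grangercausalitytests; the sleep data and the DYNOTEARS comparison (available in the causalnex package) are not reproduced, and the variable names are hypothetical.

```python
# Pairwise Granger causality test: does x help predict y beyond y's own past?
# Note the column convention: the test asks whether the *second* column Granger-causes
# the first, and it prints a per-lag summary by default.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + 0.3 * rng.normal()

res = grangercausalitytests(np.column_stack([y, x]), maxlag=3)
for lag, (tests, _) in res.items():
    f, p = tests["ssr_ftest"][:2]
    print(f"lag {lag}: F = {f:7.1f}, p = {p:.3g}")
```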
44. Functional Bayesian networks for discovering causality from multivariate functional data.
- Author
-
Zhou, Fangting, He, Kejun, Wang, Kunbo, Xu, Yanxun, and Ni, Yang
- Subjects
- *BAYESIAN analysis, *DIRECTED acyclic graphs, *GAUSSIAN processes - Abstract
Multivariate functional data arise in a wide range of applications. One fundamental task is to understand the causal relationships among these functional objects of interest. In this paper, we develop a novel Bayesian network (BN) model for multivariate functional data where conditional independencies and causal structure are encoded by a directed acyclic graph. Specifically, we allow the functional objects to deviate from Gaussian processes, which is the key to unique causal structure identification even when the functions are measured with noises. A fully Bayesian framework is designed to infer the functional BN model with natural uncertainty quantification through posterior summaries. Simulation studies and real data examples demonstrate the practical utility of the proposed model. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
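A common way to operationalise "multivariate functional data" before any graph learning is to represent each observed curve by a small vector of basis coefficients. The sketch below uses a least-squares B-spline fit from SciPy; it is a generic pre-processing illustration under that assumption, not the paper's fully Bayesian functional BN or its non-Gaussian-process model.

```python
# Sketch: compress a curve observed at time points t into B-spline coefficients,
# so downstream structure learning can work with short coefficient vectors.
import numpy as np
from scipy.interpolate import make_lsq_spline

def curve_to_coefficients(t, y, n_basis=8, degree=3):
    """Least-squares B-spline fit; returns the n_basis coefficient vector."""
    interior = np.linspace(t[0], t[-1], n_basis - degree + 1)
    knots = np.concatenate(([t[0]] * degree, interior, [t[-1]] * degree))
    return make_lsq_spline(t, y, knots, k=degree).c

# Hypothetical usage: stack coefficients over subjects for each functional node,
# then learn a graph over the resulting coefficient blocks.
```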
45. Individualized causal discovery with latent trajectory embedded Bayesian networks.
- Author
-
Zhou, Fangting, He, Kejun, and Ni, Yang
- Subjects
- *
BAYESIAN analysis , *GENE regulatory networks , *LONG-Term Evolution (Telecommunications) , *DIRECTED acyclic graphs - Abstract
Bayesian networks have been widely used to generate causal hypotheses from multivariate data. Despite their popularity, the vast majority of existing causal discovery approaches make the strong assumption of a (partially) homogeneous sampling scheme. However, such an assumption can be seriously violated, causing significant biases when the underlying population is inherently heterogeneous. To address this, we propose a novel causal Bayesian network model, termed BN‐LTE, that embeds heterogeneous samples onto a low‐dimensional manifold and builds Bayesian networks conditional on the embedding. This new framework allows for more precise network inference by improving the estimation resolution from the population level to the observation level. Moreover, while causal Bayesian networks are in general not identifiable with purely observational, cross‐sectional data due to Markov equivalence, with the blessing of causal effect heterogeneity, we prove that the proposed BN‐LTE is uniquely identifiable under relatively mild assumptions. Through extensive experiments, we demonstrate the superior performance of BN‐LTE in causal structure learning as well as in inferring observation‐specific gene regulatory networks from observational data. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
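The idea of making network parameters observation-specific by conditioning on a low-dimensional embedding can be illustrated, very roughly, for a single edge. The toy sketch below (PCA embedding, linear interaction terms) is not BN-LTE and does not reproduce its identifiability results; all function and variable names are invented for illustration.

```python
# Toy sketch: let one edge's coefficient vary with a low-dimensional embedding
# of the samples, giving an observation-specific edge weight.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def observation_specific_weight(parent, child, data_for_embedding, n_components=2):
    """parent, child: 1-D numpy arrays over observations.
    Returns a function mapping an observation's embedding to its edge weight."""
    Z = PCA(n_components=n_components).fit_transform(data_for_embedding)
    # child ~ parent + parent*embedding: the interaction terms make the slope
    # (the edge weight) a linear function of the embedding coordinates.
    design = np.column_stack([parent, parent[:, None] * Z])
    reg = LinearRegression().fit(design, child)
    base, interaction = reg.coef_[0], reg.coef_[1:]
    return lambda z: base + z @ interaction
```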
46. ISHS-Net: Single-View 3D Reconstruction by Fusing Features of Image and Shape Hierarchical Structures.
- Author
-
Gao, Guoqing, Yang, Liang, Zhang, Quan, Wang, Chongmin, Bao, Hua, and Rao, Changhui
- Subjects
- *
COMPUTER graphics - Abstract
The reconstruction of 3D shapes from a single view has been a longstanding challenge. Previous methods have primarily focused on learning either geometric features, which depict overall shape contours but are insufficient for occluded regions; local features, which capture details but cannot represent the complete structure; or structural features, which encode part relationships but require predefined semantics. However, the fusion of geometric, local, and structural features has been lacking, leading to inaccurate reconstruction of shapes with occlusions or novel compositions. To address this issue, we propose a two-stage approach to 3D shape reconstruction. In the first stage, we encode the hierarchical structure features of the 3D shape using an encoder-decoder network. In the second stage, we enhance the hierarchical structure features by fusing them with global and point features and feed the enhanced features into a signed distance function (SDF) prediction network to obtain coarse SDF values. Using the camera pose, we project arbitrary 3D points in space onto feature maps at different depths of the CNN to obtain their corresponding positions. We then concatenate the features at these positions to form local features, which are also fed into the SDF prediction network to obtain fine-grained SDF values. By fusing the two sets of SDF values, we improve the accuracy of the model and enable it to reconstruct other object types with higher quality. Comparative experiments demonstrate that the proposed method outperforms state-of-the-art approaches in terms of accuracy. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
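The projection-and-sampling step described above (mapping 3D points into CNN feature maps via the camera pose and reading off pixel-aligned local features) is a standard technique. A minimal PyTorch-style sketch, assuming given intrinsics K and world-to-camera pose (R, t), might look like this; it is not the authors' code, and the tensor layout is an assumption.

```python
# Sketch: project 3D points with a pinhole camera and bilinearly sample
# per-point local features from a CNN feature map.
import torch
import torch.nn.functional as F

def sample_local_features(points, K, R, t, feature_map):
    """points: (P, 3) world coordinates; feature_map: (1, C, H, W).
    Returns a (P, C) tensor of bilinearly sampled features."""
    cam = points @ R.T + t                    # world -> camera coordinates
    uv = cam @ K.T                            # pinhole projection
    uv = uv[:, :2] / uv[:, 2:3]               # perspective divide -> pixel coords
    H, W = feature_map.shape[-2:]
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,    # normalise to [-1, 1]
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1).view(1, 1, -1, 2)
    feats = F.grid_sample(feature_map, grid, mode="bilinear", align_corners=True)
    return feats[0, :, 0, :].T                # (P, C)
```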
47. Bi-objective evolutionary Bayesian network structure learning via skeleton constraint.
- Author
-
Wu, Ting, Qian, Hong, Liu, Ziqi, Zhou, Jun, and Zhou, Aimin
- Abstract
Bayesian networks are a popular approach to representing and reasoning about uncertain knowledge, and structure learning is the first step in learning a Bayesian network. Score-based methods are among the most popular ways of learning the structure. In most cases, the score of a Bayesian network is defined as the sum of a log-likelihood term and a complexity term weighted by a penalty function; if the penalty function is chosen poorly, it can hurt the performance of the structure search. Bayesian network structure learning is therefore essentially a bi-objective optimization problem. However, existing bi-objective structure learning algorithms can only be applied to small-scale networks. To this end, this paper proposes a bi-objective evolutionary Bayesian network structure learning algorithm via skeleton constraint (BBS) for medium-scale networks. To boost search performance, BBS introduces a random order prior (ROP) initialization operator. ROP generates a skeleton that constrains the search space, which is the key to scaling structure learning up to larger problems. Acyclicity is then guaranteed by incorporating variable orders into the initial skeleton. BBS further designs Pareto-rank-based crossover and skeleton-guided mutation operators, which act on the skeleton obtained from ROP to make the search more targeted. Finally, BBS provides a strategy for choosing the final solution. Experimental results show that BBS consistently finds structures closer to the ground truth than single-objective structure learning methods. Furthermore, compared with existing bi-objective structure learning methods, BBS is scalable and can be applied to medium-scale Bayesian network datasets. On the educational problem of discovering the factors that influence students' academic performance, BBS provides higher-quality solutions and more flexible solution selection than widely used Bayesian network structure learning methods. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
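The two objectives that such a bi-objective search trades off can be written down directly for discrete data: a decomposable log-likelihood and a complexity measure (here a simple parameter count). The sketch below only evaluates those objectives for a candidate DAG; it is an assumption-laden illustration, not the BBS algorithm, its skeleton constraint, or its evolutionary operators.

```python
# Sketch: log-likelihood and complexity of a candidate DAG over discrete data.
import numpy as np
import pandas as pd

def loglik_and_complexity(data: pd.DataFrame, parents: dict):
    """parents maps each node (column name) to a list of parent column names."""
    loglik, n_params = 0.0, 0
    for node, pa in parents.items():
        r = data[node].nunique()
        groups = data.groupby(list(pa))[node] if pa else [(None, data[node])]
        q = 0  # number of observed parent configurations
        for _, vals in groups:
            q += 1
            counts = vals.value_counts().to_numpy(dtype=float)
            loglik += float((counts * np.log(counts / counts.sum())).sum())
        n_params += (r - 1) * q
    return loglik, n_params

# A bi-objective search would look for Pareto-optimal DAGs that maximise the
# log-likelihood while minimising the parameter count, rather than fixing a
# single penalty weight in advance.
```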
48. What Do Counterfactuals Say About the World? Reconstructing Probabilistic Logic Programs from Answers to 'What If?' Queries
- Author
-
Rückschloß, Kilian, Weitkämper, Felix, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Bellodi, Elena, editor, Lisi, Francesca Alessandra, editor, and Zese, Riccardo, editor
- Published
- 2023
- Full Text
- View/download PDF
49. csl-MTFL: Multi-task Feature Learning with Joint Correlation Structure Learning for Alzheimer’s Disease Cognitive Performance Prediction
- Author
-
Liang, Wei, Zhang, Kai, Cao, Peng, Liu, Xiaoli, Yang, Jinzhu, Zaiane, Osmar R., Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Yang, Xiaochun, editor, Suhartanto, Heru, editor, Wang, Guoren, editor, Wang, Bin, editor, Jiang, Jing, editor, Li, Bing, editor, Zhu, Huaijie, editor, and Cui, Ningning, editor
- Published
- 2023
- Full Text
- View/download PDF
50. RBNets: A Reinforcement Learning Approach for Learning Bayesian Network Structure
- Author
-
Zheng, Zuowu, Wang, Chao, Gao, Xiaofeng, Chen, Guihai, Goos, Gerhard, Founding Editor, Hartmanis, Juris, Founding Editor, Bertino, Elisa, Editorial Board Member, Gao, Wen, Editorial Board Member, Steffen, Bernhard, Editorial Board Member, Yung, Moti, Editorial Board Member, Koutra, Danai, editor, Plant, Claudia, editor, Gomez Rodriguez, Manuel, editor, Baralis, Elena, editor, and Bonchi, Francesco, editor
- Published
- 2023
- Full Text
- View/download PDF