1,741 results for "Statistical relational learning"
Search Results
2. Combining Word Embeddings-Based Similarity Measures for Transfer Learning Across Relational Domains
- Author
-
Luca, Thais, Paes, Aline, Zaverucha, Gerson, Muggleton, Stephen H., editor, and Tamaddoni-Nezhad, Alireza, editor
- Published
- 2024
- Full Text
- View/download PDF
3. Word embeddings-based transfer learning for boosted relational dependency networks.
- Author
-
Luca, Thais, Paes, Aline, and Zaverucha, Gerson
- Subjects
MACHINE learning, STATISTICAL learning, VECTOR spaces, VECTOR data - Abstract
Conventional machine learning methods assume data to be independent and identically distributed (i.i.d.) and ignore the relational structure of the data, which contains crucial information about how objects participate in relationships and events. Statistical Relational Learning (SRL) combines elements of statistical and probabilistic modeling with relational learning to represent and learn in domains with complex relational and rich probabilistic structures. SRL models do not assume data to be i.i.d., but, like conventional machine learning models, they do assume that training and testing data are sampled from the same distribution. Transfer learning has emerged as an essential technique for handling scenarios where this assumption does not hold. It aims to give methods the ability to recognize knowledge previously learned in a source domain and apply it to a new model in a target domain to start solving a new task. For SRL models, the primary challenge is to transfer the learned structure, mapping the vocabulary across different domains. In this work, we propose TransBoostler, an algorithm that uses pre-trained word embeddings to guide the mapping. We follow the assumption that the name of a predicate has a semantic connotation that can be mapped to a vector space model. Next, TransBoostler employs theory revision to adapt the mapped model to the target data. Experimental results showed that TransBoostler successfully transferred trees across different domains. It performs as well as, or better than, previous systems and requires less training time for some of the investigated scenarios. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
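The predicate-mapping idea in the abstract above can be sketched in a few lines: treat each predicate name as a word vector and greedily pair each source predicate with its most similar unused target predicate. The vectors and predicate names below are made-up stand-ins for pre-trained embeddings, not TransBoostler's actual data or API.

```python
import math

# Toy, hand-coded "embeddings" for predicate names; a real system would look
# these up in pre-trained word vectors (the numbers here are illustrative only).
EMB = {
    "advisedby": [0.9, 0.1, 0.2],
    "publication": [0.1, 0.8, 0.3],
    "workedunder": [0.85, 0.15, 0.25],
    "movierating": [0.2, 0.7, 0.4],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def map_predicates(source_preds, target_preds):
    """Greedily map each source predicate to its most similar target predicate."""
    mapping = {}
    available = set(target_preds)
    for s in source_preds:
        best = max(available, key=lambda t: cosine(EMB[s], EMB[t]))
        mapping[s] = best
        available.remove(best)  # enforce a one-to-one mapping
    return mapping

mapping = map_predicates(["advisedby", "publication"], ["workedunder", "movierating"])
print(mapping)
```

The mapped model would then be refined by theory revision against the target data, which this sketch does not attempt.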
4. Select First, Transfer Later: Choosing Proper Datasets for Statistical Relational Transfer Learning
- Author
-
Luca, Thais, Paes, Aline, Zaverucha, Gerson, Bellodi, Elena, editor, Lisi, Francesca Alessandra, editor, and Zese, Riccardo, editor
- Published
- 2023
- Full Text
- View/download PDF
5. Statistical Relational Structure Learning with Scaled Weight Parameters
- Author
-
Weitkämper, Felix, Ravdin, Dmitriy, Fabry, Ramona, Bellodi, Elena, editor, Lisi, Francesca Alessandra, editor, and Zese, Riccardo, editor
- Published
- 2023
- Full Text
- View/download PDF
6. Statistical Relational Extension of Answer Set Programming
- Author
-
Lee, Joohyung, Yang, Zhun, Bertossi, Leopoldo, editor, and Xiao, Guohui, editor
- Published
- 2023
- Full Text
- View/download PDF
7. Logic + probabilistic programming + causal laws
- Author
-
Vaishak Belle
- Subjects
statistical relational learning, first-order logic, probabilistic programming, Science - Abstract
Probabilistic planning attempts to incorporate stochastic models directly into the planning process, which is the problem of synthesizing a sequence of actions that achieves some objective for a putative agent. Probabilistic programming has rapidly emerged as a key paradigm to integrate probabilistic concepts with programming languages, which allows one to specify complex probabilistic models using programming primitives like recursion and loops. Probabilistic logic programming aims to further ease the specification of structured probability distributions using first-order logical artefacts. In this article, we briefly discuss the modelling of probabilistic planning through the lens of probabilistic (logic) programming. Although many flavours for such an integration are possible, we focus on two representative examples. The first is an extension to the popular probabilistic logic programming language PROBLOG, which permits the decoration of probabilities on Horn clauses—that is, prolog programs. The second is an extension to the popular agent programming language GOLOG, which permits the logical specification of dynamical systems via actions, effects and observations. The probabilistic extensions thereof emphasize different strengths of probabilistic programming that are particularly useful for non-trivial modelling issues raised in probabilistic planning. Among other things, one can instantiate planning problems with growing and shrinking state spaces, discrete and continuous probability distributions, and non-unique prior distributions in a first-order setting.
- Published
- 2023
- Full Text
- View/download PDF
8. Non-parametric Learning of Embeddings for Relational Data Using Gaifman Locality Theorem
- Author
-
Dhami, Devendra Singh, Yan, Siwen, Kunapuli, Gautam, Natarajan, Sriraam, Katzouris, Nikos, editor, and Artikis, Alexander, editor
- Published
- 2022
- Full Text
- View/download PDF
9. Mapping Across Relational Domains for Transfer Learning with Word Embeddings-Based Similarity
- Author
-
Luca, Thais, Paes, Aline, Zaverucha, Gerson, Katzouris, Nikos, editor, and Artikis, Alexander, editor
- Published
- 2022
- Full Text
- View/download PDF
10. Generative Clausal Networks: Relational Decision Trees as Probabilistic Circuits
- Author
-
Ventola, Fabrizio, Dhami, Devendra Singh, Kersting, Kristian, Katzouris, Nikos, editor, and Artikis, Alexander, editor
- Published
- 2022
- Full Text
- View/download PDF
11. Learning and reasoning with graph data
- Author
-
Manfred Jaeger
- Subjects
graph data, representation learning, statistical relational learning, graph neural networks, neuro-symbolic integration, inductive logic programming, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Reasoning about graphs, and learning from graph data is a field of artificial intelligence that has recently received much attention in the machine learning areas of graph representation learning and graph neural networks. Graphs are also the underlying structures of interest in a wide range of more traditional fields ranging from logic-oriented knowledge representation and reasoning to graph kernels and statistical relational learning. In this review we outline a broad map and inventory of the field of learning and reasoning with graphs that spans the spectrum from reasoning in the form of logical deduction to learning node embeddings. To obtain a unified perspective on such a diverse landscape we introduce a simple and general semantic concept of a model that covers logic knowledge bases, graph neural networks, kernel support vector machines, and many other types of frameworks. Still at a high semantic level, we survey common strategies for model specification using probabilistic factorization and standard feature construction techniques. Based on this semantic foundation we introduce a taxonomy of reasoning tasks that casts problems ranging from transductive link prediction to asymptotic analysis of random graph models as queries of different complexities for a given model. Similarly, we express learning in different frameworks and settings in terms of a common statistical maximum likelihood principle. Overall, this review aims to provide a coherent conceptual framework that provides a basis for further theoretical analyses of respective strengths and limitations of different approaches to handling graph data, and that facilitates combination and integration of different modeling paradigms.
- Published
- 2023
- Full Text
- View/download PDF
12. The distribution semantics in probabilistic logic programming and probabilistic description logics: a survey.
- Author
-
Bellodi, Elena
- Subjects
*DESCRIPTION logics, *LOGIC programming, *SEMANTICS (Philosophy), *MACHINE learning, *WEB-based user interfaces, *SEMANTICS - Abstract
Representing uncertain information is crucial for modeling real world domains. This has been fully recognized both in the field of Logic Programming and of Description Logics (DLs), with the introduction of probabilistic logic languages and various probabilistic extensions of DLs respectively. Several works have considered the distribution semantics as the underlying semantics of Probabilistic Logic Programming (PLP) languages and probabilistic DLs (PDLs), and have then targeted the problem of reasoning and learning in them. This paper is a survey of inference, parameter and structure learning algorithms for PLP languages and PDLs based on the distribution semantics. A few of these algorithms are also available as web applications. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
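The distribution semantics surveyed above can be illustrated by brute force: each probabilistic fact independently holds or not, every total choice of facts gets a probability, and a query's probability is the sum over the choices in which it holds. The toy enumerator below uses an assumed two-fact example program, not any system from the survey.

```python
from itertools import product

# Probabilistic facts with their probabilities (illustrative toy program):
#   0.1::burglary.  0.2::earthquake.
facts = {"burglary": 0.1, "earthquake": 0.2}

def alarm(world):
    # Deterministic rules: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

def query_prob(query):
    """Sum the probabilities of all total choices in which the query holds."""
    total = 0.0
    names = list(facts)
    for choice in product([True, False], repeat=len(names)):
        world = dict(zip(names, choice))
        p = 1.0
        for name in names:
            p *= facts[name] if world[name] else 1.0 - facts[name]
        if query(world):
            total += p
    return total

print(round(query_prob(alarm), 4))  # P(alarm) = 1 - 0.9 * 0.8 = 0.28
```

Real inference engines avoid this exponential enumeration (e.g. via knowledge compilation), which is much of what the surveyed algorithms are about.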
13. Building Practical Statistical Relational Learning Systems
- Author
-
Augustine, Eriq
- Subjects
Computer science, Machine Learning, Probabilistic Graphical Models, Probabilistic Programming, Scalability, Statistical Relational Learning, Usability - Abstract
In our increasingly connected world, data comes from many different sources, in many different forms, and is noisy, complex, and structured. To confront modern data, we need to embrace the structure inherent in the data and in the predictions. Statistical relational learning (SRL) is a subfield of machine learning that provides an effective means of approaching this problem of structured prediction. SRL frameworks use weighted logical and arithmetic expressions to easily create probabilistic graphical models (PGMs) to jointly reason over interdependent data. However, despite being well suited for modern, interconnected data, SRL faces several challenges that keep it from becoming practical and widely used in the machine learning community. In this dissertation, I address four pillars of practicality for SRL systems: scalability, expressivity, model adaptability, and usability. My work in this dissertation uses and extends Probabilistic Soft Logic (PSL), a state-of-the-art open-source SRL framework. Scalability in SRL systems is essential for using large datasets and complex models. Because of the complex nature of interconnected data, models can easily outgrow available memory. To address scalability for SRL, I developed methods that more efficiently and intelligently instantiate PGMs from templates and data. I also developed fixed-memory inference methods that can perform inference on very large models without requiring a proportional amount of memory. Expressivity allows SRL systems to represent many different problems and data patterns. Because SRL uses logical and arithmetic expressions to represent structured dependencies, SRL frameworks need to be able to express more than just what is represented by feature vectors.
To address expressivity for SRL, I created a system to incorporate neural models with structured SRL inference, and expanded the interpretation of PSL weight hyperparameters to include additional types of distributions. Model adaptability is the ability of SRL frameworks to handle models that change. A changing model can be as simple as a model whose hyperparameters are updated, or as complex as a model that changes its structure over time. To address model adaptability for SRL, I developed new weight learning approaches for PSL, and created a system for generalized online inference in PSL. Usability makes SRL frameworks easy for people to use. Because of the need to model structural dependencies, SRL frameworks are often harder to use than more common machine learning libraries. To address usability for SRL, I have created a new SRL framework that removes the tight coupling between the different components of the SRL pipeline seen in other SRL frameworks, and that allows the recreation of existing SRL frameworks, along with the creation of new ones, using the same common runtime. Additionally, I developed a visual model inspector for analyzing and debugging PSL models.
- Published
- 2023
14. Noise Reduction in Distant Supervision for Relation Extraction Using Probabilistic Soft Logic
- Author
-
Kirsch, Birgit, Niyazova, Zamira, Mock, Michael, Rüping, Stefan, Cellier, Peggy, editor, and Driessens, Kurt, editor
- Published
- 2020
- Full Text
- View/download PDF
15. An Ontology-Based Deep Learning Approach for Knowledge Graph Completion with Fresh Entities
- Author
-
Amador-Domínguez, Elvira, Hohenecker, Patrick, Lukasiewicz, Thomas, Manrique, Daniel, Serrano, Emilio, Herrera, Francisco, editor, Matsui, Kenji, editor, and Rodríguez-González, Sara, editor
- Published
- 2020
- Full Text
- View/download PDF
16. Generating Random Logic Programs Using Constraint Programming
- Author
-
Dilkas, Paulius, Belle, Vaishak, and Simonis, Helmut, editor
- Published
- 2020
- Full Text
- View/download PDF
17. Integrating Linked Data search results using statistical relational learning approaches
- Author
-
Al Shekaili, Dhahi and Fernandes, Alvaro
- Subjects
005.7, Statistical Relational Learning, Probabilistic Soft Logic, Linked Data Search, Markov Logic Networks - Abstract
Linked Data (LD) follows the web in providing low barriers to publication, and in deploying web-scale keyword search as a central way of identifying relevant data. As in the web, searches initially identify results in broadly the form in which they were published, and the published form may be provided to the user as the result of a search. This will be satisfactory in some cases, but the diversity of publishers means that the results of the search may be obtained from many different sources, and described in many different ways. As such, there seems to be an opportunity to add value to search results by providing users with an integrated representation that brings together features from different sources. This involves an on-the-fly and automated data integration process being applied to search results, which raises the question as to what technologies might be most suitable for supporting the integration of LD search results. In this thesis we take the view that the problem of integrating LD search results is best approached by assimilating different forms of evidence that support the integration process. In particular, this dissertation shows how Statistical Relational Learning (SRL) formalisms (viz., Markov Logic Networks (MLN) and Probabilistic Soft Logic (PSL)) can be exploited to assimilate different sources of evidence in a principled way and to beneficial effect for users. Specifically, in this dissertation we consider syntactic evidence derived from LD search results and from matching algorithms, semantic evidence derived from LD vocabularies, and user evidence, in the form of feedback.
This dissertation makes the following key contributions: (i) a characterisation of key features of LD search results that are relevant to their integration, and a description of some initial experiences in the use of MLN for interpreting search results; (ii) a PSL rule-base that models the uniform assimilation of diverse kinds of evidence; (iii) an empirical evaluation of how the contributed MLN and PSL approaches perform in terms of their ability to infer a structure for integrating LD search results; and (iv) concrete examples of how populating such inferred structures for presentation to the end user is beneficial, as well as guiding the collection of feedback whose assimilation further improves search results presentation.
- Published
- 2017
18. Linguistic Feature Representation with Statistical Relational Learning for Readability Assessment
- Author
-
Qiu, Xinying, Lu, Dawei, Shen, Yuming, Cai, Yi, Tang, Jie, editor, Kan, Min-Yen, editor, Zhao, Dongyan, editor, Li, Sujian, editor, and Zan, Hongying, editor
- Published
- 2019
- Full Text
- View/download PDF
19. Lifted generative learning of Markov logic networks
- Author
-
Van Haaren, Jan, Van den Broeck, Guy, Meert, Wannes, and Davis, Jesse
- Subjects
Statistical relational learning, Lifted probabilistic inference, Markov logic networks, Parameter learning, Structure learning, Artificial Intelligence and Image Processing, Information Systems, Cognitive Sciences, Artificial Intelligence & Image Processing - Abstract
Markov logic networks (MLNs) are a well-known statistical relational learning formalism that combines Markov networks with first-order logic. MLNs attach weights to formulas in first-order logic. Learning MLNs from data is a challenging task as it requires searching through the huge space of possible theories. Additionally, evaluating a theory’s likelihood requires learning the weight of all formulas in the theory. This in turn requires performing probabilistic inference, which, in general, is intractable in MLNs. Lifted inference speeds up probabilistic inference by exploiting symmetries in a model. We explore how to use lifted inference when learning MLNs. Specifically, we investigate generative learning where the goal is to maximize the likelihood of the model given the data. First, we provide a generic algorithm for learning maximum likelihood weights that works with any exact lifted inference approach. In contrast, most existing approaches optimize approximate measures such as the pseudo-likelihood. Second, we provide a concrete parameter learning algorithm based on first-order knowledge compilation. Third, we propose a structure learning algorithm that learns liftable MLNs, which is the first MLN structure learning algorithm that exactly optimizes the likelihood of the model. Finally, we perform an empirical evaluation on three real-world datasets. Our parameter learning algorithm results in more accurate models than several competing approximate approaches. It learns more accurate models in terms of test-set log-likelihood as well as prediction tasks. Furthermore, our tractable learner outperforms intractable models on prediction tasks suggesting that liftable models are a powerful hypothesis space, which may be sufficient for many standard learning problems.
- Published
- 2016
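The generative learning objective described in the abstract above has a simple shape: the gradient of the log-likelihood with respect to a formula's weight is the observed grounding count minus its expected count under the model. The toy below is an assumed two-constant, one-formula MLN computed by brute-force enumeration rather than lifted inference; it recovers the maximum-likelihood weight by gradient ascent.

```python
import math
from itertools import product

# Tiny MLN: one formula "smokes(x)" with weight w over constants {a, b}.
# P(world) is proportional to exp(w * n(world)), where n counts true groundings.

def expected_count(w, n_consts=2):
    """Exact E[n] under the model, by enumerating all 2^n_consts worlds."""
    z, en = 0.0, 0.0
    for world in product([0, 1], repeat=n_consts):
        n = sum(world)
        p = math.exp(w * n)
        z += p
        en += n * p
    return en / z

def fit_weight(observed_count, lr=0.5, steps=200):
    """Gradient ascent on the log-likelihood: dL/dw = observed - expected."""
    w = 0.0
    for _ in range(steps):
        w += lr * (observed_count - expected_count(w))
    return w

w = fit_weight(observed_count=1.5)
print(round(w, 4))  # closed form here: 2*sigmoid(w) = 1.5, i.e. w = ln(3)
```

Lifted inference, the subject of the paper, computes the same expected counts exactly while exploiting symmetries instead of enumerating worlds.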
20. Learning Distributional Programs for Relational Autocompletion.
- Author
-
KUMAR, NITESH, KUŽELKA, ONDŘEJ, and DE RAEDT, LUC
- Subjects
CONTINUOUS distributions, LOGIC programming, DISTRIBUTION (Probability theory), MISSING data (Statistics), STATISTICAL models, EXPECTATION-maximization algorithms, RELATIONAL databases - Abstract
Relational autocompletion is the problem of automatically filling out some missing values in multi-relational data. We tackle this problem within the probabilistic logic programming framework of Distributional Clauses (DCs), which supports both discrete and continuous probability distributions. Within this framework, we introduce DiceML – an approach to learn both the structure and the parameters of DC programs from relational data (with possibly missing data). To realize this, DiceML integrates statistical modeling and DCs with rule learning. The distinguishing features of DiceML are that it (1) tackles autocompletion in relational data, (2) learns DCs extended with statistical models, (3) deals with both discrete and continuous distributions, (4) can exploit background knowledge, and (5) uses an expectation–maximization-based (EM) algorithm to cope with missing data. The empirical results show the promise of the approach, even when there is missing data. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
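The expectation-maximization step mentioned in the abstract can be sketched in its simplest possible form, far simpler than DiceML itself: estimating one Bernoulli parameter when some values are missing, filling in each missing value with its current expectation (E-step) and re-estimating the parameter (M-step).

```python
# Assumed toy: a Boolean attribute with some values missing (None).
observed = [1, 1, 0, 1, None, None]

def em_bernoulli(data, steps=50):
    p = 0.5  # initial guess
    for _ in range(steps):
        expected = [x if x is not None else p for x in data]  # E-step
        p = sum(expected) / len(expected)                     # M-step
    return p

print(round(em_bernoulli(observed), 3))
```

The fixed point satisfies p = (3 + 2p) / 6, i.e. p = 0.75; DiceML applies the same E/M alternation to the parameters of full distributional-clause programs.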
21. Pruning strategies for the efficient traversal of the search space in PILP environments.
- Author
-
Côrte-Real, Joana, Dutra, Inês, and Rocha, Ricardo
- Subjects
SPACE environment, INDUCTION (Logic), LOGIC programming, STATISTICAL learning, MODEL theory - Abstract
Probabilistic inductive logic programming (PILP) is a statistical relational learning technique which extends inductive logic programming by considering probabilistic data. The ability to use probabilities to represent uncertainty comes at the cost of an exponential evaluation time when composing theories to model the given problem. For this reason, PILP systems rely on various pruning strategies in order to reduce the search space. However, to the best of the authors' knowledge, there has been no systematic analysis of the different pruning strategies, how they impact the search space and how they interact with one another. This work presents a unified representation for PILP pruning strategies which enables end-users to understand how these strategies work both individually and combined and to make an informed decision on which pruning strategies to select so as to best achieve their goals. The performance of pruning strategies is evaluated both time and quality-wise in two state-of-the-art PILP systems with datasets from three different domains. Besides analysing the performance of the pruning strategies, we also illustrate the utility of PILP in one of the application domains, which is a real-world application. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
22. Structure learning for relational logistic regression: an ensemble approach.
- Author
-
Ramanan, Nandini, Kunapuli, Gautam, Khot, Tushar, Fatemi, Bahare, Kazemi, Seyed Mehran, Poole, David, Kersting, Kristian, and Natarajan, Sriraam
- Subjects
BOOSTING algorithms, MACHINE learning, STATISTICAL learning, LOGISTIC regression analysis - Abstract
We consider the problem of learning Relational Logistic Regression (RLR). Unlike standard logistic regression, the features of RLR are first-order formulae with associated weight vectors instead of scalar weights. We turn the problem of learning RLR into learning these vector-weighted formulae and develop a learning algorithm based on the recently successful functional-gradient boosting methods for probabilistic logic models. We derive the functional gradients and show how the weights can be learned simultaneously in an efficient manner. Our empirical evaluation on standard data sets demonstrates the superiority of our approach over other methods for learning RLR. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
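The functional-gradient boosting recipe in the abstract reduces, per example, to the pointwise gradient I[y] − P(y | x). The sketch below is an assumed scalar toy: real relational functional-gradient boosting fits relational regression trees to these gradients, and RLR uses vector-weighted first-order formulae, but the additive structure is the same.

```python
import math

# Toy data: (count feature, binary label). In a relational setting the feature
# would be something like "number of related objects satisfying a formula".
data = [(0, 0), (1, 0), (2, 1), (3, 1)]

def predict(psi, x):
    """Sigmoid of the sum of all boosted stages' contributions for x."""
    return 1.0 / (1.0 + math.exp(-sum(stage.get(x, 0.0) for stage in psi)))

def boost(data, stages=20):
    psi = []
    for _ in range(stages):
        # Functional gradient for each example: y - P(y=1 | x).
        grads = {x: y - predict(psi, x) for x, y in data}
        # Fit one constant "stump" per feature value to the gradients.
        psi.append(dict(grads))
    return psi

psi = boost(data)
print([round(predict(psi, x), 2) for x, _ in data])
```

Each stage pushes every example's logit toward its label, so after a few stages the positives approach probability 1 and the negatives approach 0.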
23. Declarative Aspects in Explicative Data Mining for Computational Sensemaking
- Author
-
Atzmueller, Martin, Seipel, Dietmar, editor, Hanus, Michael, editor, and Abreu, Salvador, editor
- Published
- 2018
- Full Text
- View/download PDF
24. Quantified neural Markov logic networks.
- Author
-
Jung, Peter, Marra, Giuseppe, and Kuželka, Ondřej
- Subjects
*LOGIC, *POTENTIAL functions, *SELF-expression, *STATISTICAL learning, *EXPERTISE, *AGGREGATION (Statistics) - Abstract
Markov Logic Networks (MLNs) are discrete generative models in the exponential family whose potential functions are specified by weighted first-order logic rules. However, specifying these rules requires considerable expertise and can pose a significant challenge. To overcome this limitation, Neural MLNs (NMLNs) have been introduced, enabling the specification of potential functions as neural networks. Thanks to the compact representation of their neural potential functions, NMLNs have shown impressive performance in modeling complex domains like molecular data. Despite the superior performance of NMLNs, their theoretical expressiveness is still equivalent to that of MLNs without quantifiers. In this paper, we propose a new class of NMLN, called Quantified NMLN, that extends the expressivity of NMLNs to the quantified setting. Furthermore, we demonstrate how to leverage the neural nature of NMLNs to employ learnable aggregation functions as quantifiers, increasing expressivity even further. We demonstrate the competitiveness of Quantified NMLNs over original NMLNs and state-of-the-art diffusion models in molecule generation experiments. [ABSTRACT FROM AUTHOR]
- Published
- 2024
- Full Text
- View/download PDF
25. OpinionMine: A Bayesian-based framework for opinion mining using Twitter Data
- Author
-
Stefanos Zervoudakis, Emmanouil Marakakis, Haridimos Kondylakis, and Stefanos Goumas
- Subjects
Statistical Relational Learning, Probabilistic rules, Bayesian reasoning, Twitter Data, Incremental learning, Cybernetics, Q300-390, Electronic computers. Computer science, QA75.5-76.95 - Abstract
This article studies opinion mining from social media with probabilistic logic reasoning. As it is known, Twitter is one of the most active social networks, with millions of tweets sent daily, where multiple users express their opinion about traveling, economic issues, political decisions etc. As such, it offers a valuable source of information for opinion mining. In this paper we present OpinionMine, a Bayesian-based framework for opinion mining, exploiting Twitter Data. Initially, our framework imports Tweets massively by using Twitter’s API. Next, the imported Tweets are further processed automatically for constructing a set of untrained rules and random variables. Then, a Bayesian Network is derived by using the set of untrained rules, the random variables and an evidence set. After that, the trained model can be used for the evaluation of new Tweets. Finally, the constructed model can be retrained incrementally, thus becoming more robust. As application domain for the development of our methodology we have selected tourism because it is one of the most popular topics in social media. Our framework can predict users’ intention to visit a place. Among the advantages of our framework is that it follows an incremental learning strategy. That is, the derived model can be retrained incrementally with new training sets thus becoming more robust. Further, our framework can be easily adapted to opinion mining from social media on other topics, whereas the rules of the derived model are constructed in an efficient way and automatically.
- Published
- 2021
- Full Text
- View/download PDF
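A heavily simplified sketch of the Bayesian, incrementally retrainable kind of model described above: an assumed naive-Bayes formulation over tweet keywords with add-one smoothing. The actual framework derives a full Bayesian network from automatically constructed rules; only the incremental-update idea is illustrated here.

```python
from collections import Counter

class IncrementalNB:
    """Predict 'intent to visit' from tweet keywords; retrainable on the fly."""

    def __init__(self):
        self.word_counts = {True: Counter(), False: Counter()}
        self.class_counts = Counter()

    def update(self, words, visits):
        # Incremental learning step: just add the new example's counts.
        self.class_counts[visits] += 1
        self.word_counts[visits].update(words)

    def prob_visit(self, words):
        scores = {}
        for c in (True, False):
            prior = (self.class_counts[c] + 1) / (sum(self.class_counts.values()) + 2)
            score = prior
            for w in words:
                total = sum(self.word_counts[c].values())
                score *= (self.word_counts[c][w] + 1) / (total + 2)  # add-one smoothing
            scores[c] = score
        return scores[True] / (scores[True] + scores[False])

model = IncrementalNB()
model.update(["love", "crete", "beach"], True)    # labelled training tweets
model.update(["hate", "crowds"], False)
print(model.prob_visit(["love", "beach"]) > 0.5)  # prints True
```

Because `update` only accumulates counts, retraining on a new batch of tweets never requires revisiting old data, which is the practical appeal of the incremental strategy.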
26. Statistical Relational Learning
- Author
-
De Raedt, Luc, Kersting, Kristian, Sammut, Claude, editor, and Webb, Geoffrey I., editor
- Published
- 2017
- Full Text
- View/download PDF
27. A taxonomy of weight learning methods for statistical relational learning
- Author
-
Srinivasan, Sriram, Dickens, Charles, Augustine, Eriq, Farnadi, Golnoosh, and Getoor, Lise
- Published
- 2022
- Full Text
- View/download PDF
28. Symbolic Learning and Reasoning With Noisy Data for Probabilistic Anchoring
- Author
-
Pedro Zuidberg Dos Martires, Nitesh Kumar, Andreas Persson, Amy Loutfi, and Luc De Raedt
- Subjects
semantic world modeling, perceptual anchoring, probabilistic anchoring, statistical relational learning, probabilistic logic programming, object tracking, Mechanical engineering and machinery, TJ1-1570, Electronic computers. Computer science, QA75.5-76.95 - Abstract
Robotic agents should be able to learn from sub-symbolic sensor data and, at the same time, be able to reason about objects and communicate with humans on a symbolic level. This raises the question of how to overcome the gap between symbolic and sub-symbolic artificial intelligence. We propose a semantic world modeling approach based on bottom-up object anchoring using an object-centered representation of the world. Perceptual anchoring processes continuous perceptual sensor data and maintains a correspondence to a symbolic representation. We extend the definitions of anchoring to handle multi-modal probability distributions and we couple the resulting symbol anchoring system to a probabilistic logic reasoner for performing inference. Furthermore, we use statistical relational learning to enable the anchoring framework to learn symbolic knowledge in the form of a set of probabilistic logic rules of the world from noisy and sub-symbolic sensor input. The resulting framework, which combines perceptual anchoring and statistical relational learning, is able to maintain a semantic world model of all the objects that have been perceived over time, while still exploiting the expressiveness of logical rules to reason about the state of objects which are not directly observed through sensory input data. To validate our approach we demonstrate, on the one hand, the ability of our system to perform probabilistic reasoning over multi-modal probability distributions, and on the other hand, the learning of probabilistic logical rules from anchored objects produced by perceptual observations. The learned logical rules are, subsequently, used to assess our proposed probabilistic anchoring procedure. We demonstrate our system in a setting involving object interactions where object occlusions arise and where probabilistic inference is needed to correctly anchor objects.
- Published
- 2020
- Full Text
- View/download PDF
29. Transfer learning by mapping and revising boosted relational dependency networks.
- Author
-
Azevedo Santos, Rodrigo, Paes, Aline, and Zaverucha, Gerson
- Subjects
STATISTICAL learning ,REGRESSION trees ,TECHNOLOGY transfer ,OPERATOR theory ,MACHINE learning ,ALGORITHMS - Abstract
Statistical machine learning algorithms usually assume that data of considerable size is available to train the models. However, they fail in domains where data is difficult or expensive to obtain. Transfer learning has emerged to address this problem of learning from scarce data by relying on a model learned in a source domain, where data is easy to obtain, as a starting point for the target domain. On the other hand, real-world data contains objects and their relations, usually gathered from noisy environments. Finding patterns through such uncertain relational data has been the focus of the Statistical Relational Learning (SRL) area. Thus, to address domains with scarce, relational, and uncertain data, in this paper we propose TreeBoostler, an algorithm that transfers the SRL state-of-the-art Boosted Relational Dependency Networks learned in a source domain to the target domain. TreeBoostler first finds a mapping between pairs of predicates to accommodate the additive trees into the target vocabulary. It then employs two theory revision operators devised to handle incorrect relational regression trees, aiming at improving the performance of the mapped trees. In the experiments presented in this paper, TreeBoostler has successfully transferred knowledge between several distinct domains. Moreover, it performs comparably or better than learning-from-scratch methods in terms of accuracy and outperforms a transfer learning approach in terms of accuracy and runtime. [ABSTRACT FROM AUTHOR]
- Published
- 2020
- Full Text
- View/download PDF
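The predicate-mapping step described in the abstract above can be illustrated with a toy sketch. The predicate names and embedding vectors below are invented for illustration; the related TransBoostler system (entry 3 above) maps predicates by the similarity of pre-trained word embeddings of their names, while TreeBoostler searches for mappings directly, so this sketch shows only the embedding-similarity variant:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical pre-trained embeddings for predicate names (made-up 3-d vectors).
embeddings = {
    "advisedby":  (0.9, 0.1, 0.0),
    "supervises": (0.8, 0.2, 0.1),
    "publishes":  (0.0, 0.9, 0.3),
    "authorof":   (0.1, 0.8, 0.4),
}

def map_predicates(source_preds, target_preds):
    """Greedily map each source predicate to its most similar target
    predicate, enforcing a one-to-one mapping."""
    mapping, available = {}, list(target_preds)
    for s in source_preds:
        best = max(available, key=lambda t: cosine(embeddings[s], embeddings[t]))
        mapping[s] = best
        available.remove(best)
    return mapping

print(map_predicates(["advisedby", "publishes"], ["supervises", "authorof"]))
# {'advisedby': 'supervises', 'publishes': 'authorof'}
```

The greedy one-to-one constraint mirrors the requirement that each source predicate be rewritten into a single target predicate before theory revision adapts the mapped trees.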
30. Learning Efficiently in Semantic Based Regularization
- Author
-
Diligenti, Michelangelo, Gori, Marco, Scoca, Vincenzo, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Frasconi, Paolo, editor, Landwehr, Niels, editor, Manco, Giuseppe, editor, and Vreeken, Jilles, editor
- Published
- 2016
- Full Text
- View/download PDF
31. Processing Markov Logic Networks with GPUs: Accelerating Network Grounding
- Author
-
Martínez-Angeles, Carlos Alberto, Dutra, Inês, Costa, Vítor Santos, Buenabad-Chávez, Jorge, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Inoue, Katsumi, editor, Ohwada, Hayato, editor, and Yamamoto, Akihiro, editor
- Published
- 2016
- Full Text
- View/download PDF
32. Towards Fast And Accurate Structured Prediction
- Author
-
Srinivasan, Sriram
- Subjects
Computer science ,Large-Scale Inference ,Parameter Estimation ,Probabilistic Graphical Models ,Statistical Relational Learning ,Structured Prediction ,Weight Learning - Abstract
Complex tasks such as sequence labeling, collective classification, and activity recognition involve predicting outputs that are interdependent, a task known as structured prediction. Approaches such as structured support vector machines, conditional random fields, and statistical relational learning (SRL) frameworks are known to be effective at structured prediction. Of these approaches, SRL frameworks are unique in that they combine the ease of writing logical statements with the power of probabilistic models. However, structured prediction using SRL frameworks faces several challenges that affect both the scalability and the accuracy of these models. In this dissertation, I address four key challenges that limit the scalability and accuracy of SRL models at structured prediction tasks. First, structured prediction using large graphical models: graphical models generated through SRL frameworks for structured prediction tasks are often large, which can make inference computationally expensive. Second, memory-constrained structured prediction: performing structured prediction using large graphical models often requires a large amount of memory, which can be infeasible as some models grow to require terabytes of space. Third, real-time structured prediction: some applications require performing structured prediction in real time, which requires an extremely efficient inference engine that can perform model generation and inference in under a few milliseconds. Fourth, optimization of arbitrary user-defined evaluation functions: every application evaluates its structured prediction task via a unique evaluation function of arbitrary form, making it challenging to optimize through well-known approaches such as gradient descent. In this dissertation, I propose four general techniques that address these challenges and improve the scalability and accuracy of structured prediction tasks.
First, I develop an approach to detect and exploit symmetries in large graphical models that makes inference more tractable. Second, I develop a new framework that intertwines model generation and inference to perform structured prediction effectively. This approach uses disk space and a smart in-memory cache to minimize the memory footprint, so inference scales with disk space rather than main memory. Third, I derive a new inference procedure based on a second-order method, which reduces inference time on small graphical models to under a millisecond, enabling structured prediction to be performed online in real time. To demonstrate the effectiveness of this procedure, I introduce the concept of a micrograph to generate small, effective graphical models and perform real-time structured prediction in the product search domain. Finally, I propose four parameter estimation approaches based on search strategies that can directly optimize any user-defined evaluation function by treating it as a black box. These methods significantly improve the scalability and accuracy of structured prediction tasks and expand their scope to domains that were previously out of reach.
- Published
- 2020
33. Evaluating and Extending Latent Methods for Link-Based Classification
- Author
-
McDowell, Luke K., Fleming, Aaron, Markel, Zane, Kacprzyk, Janusz, Series editor, Bouabana-Tebibel, Thouraya, editor, and Rubin, Stuart H., editor
- Published
- 2015
- Full Text
- View/download PDF
34. The Most Probable Explanation for Probabilistic Logic Programs with Annotated Disjunctions
- Author
-
Shterionov, Dimitar, Renkens, Joris, Vlasselaer, Jonas, Kimmig, Angelika, Meert, Wannes, Janssens, Gerda, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Davis, Jesse, editor, and Ramon, Jan, editor
- Published
- 2015
- Full Text
- View/download PDF
35. Extending Datalog Intelligence
- Author
-
Kimelfeld, Benny, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, ten Cate, Balder, editor, and Mileo, Alessandra, editor
- Published
- 2015
- Full Text
- View/download PDF
36. Graph Based Relational Features for Collective Classification
- Author
-
Bayer, Immanuel, Nagel, Uwe, Rendle, Steffen, Goebel, Randy, Series editor, Tanaka, Yuzuru, Series editor, Wahlster, Wolfgang, Series editor, Cao, Tru, editor, Lim, Ee-Peng, editor, Zhou, Zhi-Hua, editor, Ho, Tu-Bao, editor, Cheung, David, editor, and Motoda, Hiroshi, editor
- Published
- 2015
- Full Text
- View/download PDF
37. Editorial: Statistical Relational Artificial Intelligence
- Author
-
Fabrizio Riguzzi, Kristian Kersting, Marco Lippi, and Sriraam Natarajan
- Subjects
statistical relational artificial intelligence ,statistical relational learning ,graphical models ,logic in AI ,machine learning ,Mechanical engineering and machinery ,TJ1-1570 ,Electronic computers. Computer science ,QA75.5-76.95 - Published
- 2019
- Full Text
- View/download PDF
38. Distributional Clauses Particle Filter
- Author
-
Nitti, Davide, De Laet, Tinne, De Raedt, Luc, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Kobsa, Alfred, Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, Series editor, Tanaka, Yuzuru, Series editor, Wahlster, Wolfgang, Series editor, Siekmann, Jörg, Series editor, Calders, Toon, editor, Esposito, Floriana, editor, Hüllermeier, Eyke, editor, and Meo, Rosa, editor
- Published
- 2014
- Full Text
- View/download PDF
39. Inhibited Effects in CP-Logic
- Author
-
Meert, Wannes, Vennekens, Joost, Hutchison, David, Series editor, Kanade, Takeo, Series editor, Kittler, Josef, Series editor, Kleinberg, Jon M., Series editor, Kobsa, Alfred, Series editor, Mattern, Friedemann, Series editor, Mitchell, John C., Series editor, Naor, Moni, Series editor, Nierstrasz, Oscar, Series editor, Pandu Rangan, C., Series editor, Steffen, Bernhard, Series editor, Terzopoulos, Demetri, Series editor, Tygar, Doug, Series editor, Weikum, Gerhard, Series editor, Goebel, Randy, Series editor, Tanaka, Yuzuru, Series editor, Wahlster, Wolfgang, Series editor, Siekmann, Jörg, Series editor, van der Gaag, Linda C., editor, and Feelders, Ad J., editor
- Published
- 2014
- Full Text
- View/download PDF
40. Community Detection for Multiplex Social Networks Based on Relational Bayesian Networks
- Author
-
Jiang, Jiuchuan, Jaeger, Manfred, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Kobsa, Alfred, editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Tanaka, Yuzuru, editor, Wahlster, Wolfgang, editor, Siekmann, Jörg, editor, Andreasen, Troels, editor, Christiansen, Henning, editor, Cubero, Juan-Carlos, editor, and Raś, Zbigniew W., editor
- Published
- 2014
- Full Text
- View/download PDF
41. Counts-of-counts similarity for prediction and search in relational data.
- Author
-
Jaeger, Manfred, Lippi, Marco, Pellegrini, Giovanni, and Passerini, Andrea
- Subjects
NEAREST neighbor analysis (Statistics) ,DATA - Abstract
Defining appropriate distance functions is a crucial aspect of effective and efficient similarity-based prediction and retrieval. Relational data are especially challenging in this regard. By viewing relational data as multi-relational graphs, one can easily see that a distance between a pair of nodes can be defined in terms of a virtually unlimited class of features, including node attributes, attributes of node neighbors, structural aspects of the node neighborhood and arbitrary combinations of these properties. In this paper we propose a rich and flexible class of metrics on graph entities based on earth mover's distance applied to a hierarchy of complex counts-of-counts statistics. We further propose an approximate version of the distance using sums of marginal earth mover's distances. We show that the approximation is correct for many cases of practical interest and allows efficient nearest-neighbor retrieval when combined with a simple metric tree data structure. An experimental evaluation on two real-world scenarios highlights the flexibility of our framework for designing metrics representing different notions of similarity. Substantial improvements in similarity-based prediction are reported when compared to solutions based on state-of-the-art graph kernels. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
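The core ingredient of the abstract above, earth mover's distance applied to counts statistics, reduces in one dimension with equal-size, equally weighted multisets to the mean absolute difference of the sorted values. A minimal sketch with made-up neighbour-degree multisets (not data from the paper):

```python
def emd_1d(xs, ys):
    """Earth mover's distance between two equal-size 1-d multisets with
    uniform weights: the mean absolute difference of the sorted values."""
    assert len(xs) == len(ys), "this shortcut needs equal-size multisets"
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)

# Toy counts-of-counts: for each node, the multiset of its neighbours' degrees.
node_a_neighbour_degrees = [1, 1, 2, 5]
node_b_neighbour_degrees = [1, 2, 2, 6]

d = emd_1d(node_a_neighbour_degrees, node_b_neighbour_degrees)
print(d)  # 0.5
```

The paper's hierarchical counts-of-counts metric nests such distances (and handles unequal weights); this sketch shows only the base one-dimensional case.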
42. Online probabilistic theory revision from examples with ProPPR.
- Author
-
Guimarães, Victor, Paes, Aline, and Zaverucha, Gerson
- Subjects
REVISIONS ,MACHINE learning ,SOCIAL networks ,ONLINE education ,RELATIONAL databases ,STATISTICAL learning - Abstract
Handling relational data streams has become a crucial task, given the availability of pervasive sensors and Internet-produced content such as social networks and knowledge graphs. In a relational environment, this is a particularly challenging task, since one cannot assume that the streams of examples are independent across iterations. Thus, most relational learning systems are still designed to learn only from closed batches of data. Furthermore, if there is a previously acquired model, these systems would either discard it or assume it to be correct. In this work, we propose an online relational learning algorithm that can handle continuous, open-ended streams of relational examples as they arrive. We employ theory revision techniques to take advantage of the previously acquired model as a starting point, finding where it should be modified to cope with the new examples and automatically updating it. We rely on Hoeffding's bound to decide whether the model must, in fact, be updated in accordance with the new examples. The proposed algorithm is built upon the ProPPR statistical relational language, aiming to accommodate the uncertainty inherent to real data. Experimental results on social network and entity co-reference datasets show the potential of the proposed approach compared to other relational learners. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
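The Hoeffding-bound test mentioned in the abstract above follows the standard form used by online learners such as Hoeffding trees: with probability 1 − δ, the observed mean of n i.i.d. samples from a range of width R lies within ε = sqrt(R² ln(1/δ) / 2n) of the true mean. The variable names and numeric values below are illustrative, not taken from the paper:

```python
import math

def hoeffding_epsilon(value_range, delta, n):
    """Hoeffding bound: with probability 1 - delta, the observed mean of n
    i.i.d. samples in a range of width value_range is within epsilon of
    the true mean."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# Revise the theory only when the observed advantage of the candidate
# revision over the current model exceeds epsilon.
observed_gain = 0.08  # made-up advantage measured on the new examples
eps = hoeffding_epsilon(value_range=1.0, delta=0.05, n=500)
revise = observed_gain > eps
print(eps, revise)  # ~0.0547, True
```

As n grows, ε shrinks, so the test becomes more willing to commit to a revision supported by a small but persistent advantage.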
43. Lifted discriminative learning of probabilistic logic programs.
- Author
-
Nguembang Fadja, Arnaud and Riguzzi, Fabrizio
- Subjects
LOGIC programming ,STATISTICAL learning ,ALGORITHMS ,MACHINE learning ,INDUCTIVE logic programming - Abstract
Probabilistic logic programming (PLP) provides a powerful tool for reasoning with uncertain relational models. However, learning probabilistic logic programs is expensive due to the high cost of inference. Among the proposals to overcome this problem, one of the most promising is lifted inference. In this paper we consider PLP models that are amenable to lifted inference and present an algorithm for performing parameter and structure learning of these models from positive and negative examples. We discuss parameter learning with EM and LBFGS and structure learning with LIFTCOVER, an algorithm similar to SLIPCOVER. The results of the comparison of LIFTCOVER with SLIPCOVER on 12 datasets show that it can achieve solutions of similar or better quality in a fraction of the time. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
44. Improving statistical relational learning with graph embeddings for socio-economic data retrieval.
- Author
-
Kalinin, Alexander, Vaganov, Danila, and Bochenina, Klavdiya
- Subjects
STATISTICAL learning ,INFORMATION retrieval ,RECOMMENDER systems ,GRAPHIC methods in statistics ,TARGET marketing ,SEARCH engines ,ECONOMIC databases - Abstract
Social media data is useful for personalized search engines, recommender systems, and targeted online marketing. Sometimes attribute values are missing for security reasons or because of a problematic data collection process. In this case, information about connections between vertices becomes more important, since it allows the structure of the social graph to be used for inferring the missing attributes. One general and effective approach to inferring missing attributes on graph structures is statistical relational learning. Graph embeddings represent topological properties better for machine learning tasks, but they are not aimed at attribute prediction. In this study, we introduce a method combining graph embeddings and statistical relational learning. We consider different combinations of the two, since different hidden connections between social ties and the considered attributes are possible. We compare performance using real data from a social network with different missing attributes and assortative patterns: gender, age, and economic status. As a result, including graph embeddings in statistical relational learning improves accuracy and significantly decreases the number of iterations. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
45. MEBN-RM: A Mapping between Multi-Entity Bayesian Network and Relational Model.
- Author
-
Park, Cheol Young and Laskey, Kathryn Blackmond
- Subjects
RELATIONAL databases ,SITUATIONAL awareness ,KNOWLEDGE representation (Information theory) ,FIRST-order logic ,RANDOM variables ,SOFTWARE development tools - Abstract
Multi-Entity Bayesian Network (MEBN) is a knowledge representation formalism combining Bayesian Networks (BNs) with First-Order Logic (FOL). MEBN has sufficient expressive power for general-purpose knowledge representation and reasoning, and is the logical basis of the Probabilistic Web Ontology Language (PR-OWL), a representation language for probabilistic ontologies. Developing an MEBN model to support a given application is a challenge, requiring definition of entities, relationships, random variables, conditional dependence relationships, and probability distributions. When available, data can be invaluable both to improve performance and to streamline development. By far the most common format for available data is the relational database (RDB). Relational databases describe and organize data according to the Relational Model (RM). Developing an MEBN model from data stored in an RDB therefore requires a mapping between the two formalisms. This paper presents MEBN-RM, a set of mapping rules between key elements of MEBN and RM. We identify links between the two languages (RM and MEBN) and define four levels of mapping from elements of RM to elements of MEBN. These definitions are implemented in the MEBN-RM algorithm, which converts a relational schema in RM to a partial MEBN model, and which has been released as an open-source MEBN-RM software tool. The method is illustrated through two example use cases in which MEBN-RM is used to develop MEBN models: a Critical Infrastructure Defense System and a Smart Manufacturing System. Both are proof-of-concept systems for situation awareness, where data coming from various sensors are stored in RDBs and converted into MEBN models through the MEBN-RM algorithm. In these use cases, we evaluate the performance of the MEBN-RM algorithm in terms of mapping speed and quality to show its efficiency in MEBN modeling. [ABSTRACT FROM AUTHOR]
- Published
- 2019
- Full Text
- View/download PDF
46. Determining the Number of Latent Factors in Statistical Multi-Relational Learning.
- Author
-
Chengchun Shi, Wenbin Lu, and Rui Song
- Subjects
MACHINE learning ,LATENT class analysis (Statistics) ,RELATION algebras ,INTEGERS ,COMPUTER simulation ,MAXIMUM likelihood statistics - Abstract
Statistical relational learning is primarily concerned with learning and inferring relationships between entities in large-scale knowledge graphs. Nickel et al. (2011) proposed a RESCAL tensor factorization model for statistical relational learning, which achieves better or at least comparable results on common benchmark data sets when compared to other state-of-the-art methods. Given a positive integer s, RESCAL computes an s-dimensional latent vector for each entity. The latent factors can be further used for solving relational learning tasks, such as collective classification, collective entity resolution and link-based clustering. The focus of this paper is to determine the number of latent factors in the RESCAL model. Due to the structure of the RESCAL model, its log-likelihood function is not concave. As a result, the corresponding maximum likelihood estimators (MLEs) may not be consistent. Nonetheless, we design a specific pseudometric, prove the consistency of the MLEs under this pseudometric and establish its rate of convergence. Based on these results, we propose a general class of information criteria and prove their model selection consistencies when the number of relations is either bounded or diverges at a proper rate of the number of entities. Simulations and real data examples show that our proposed information criteria have good finite sample properties. [ABSTRACT FROM AUTHOR]
- Published
- 2019
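The RESCAL model discussed in the abstract above represents each entity by an s-dimensional latent vector (a row of a factor matrix A) and each relation k by an s × s interaction matrix R_k, scoring the triple (e_i, r_k, e_j) as the (i, j) entry of A R_k Aᵀ. A toy sketch with random factors standing in for learnt ones (entity count, s, and the relation name are all made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, s = 4, 2  # s is the number of latent factors the paper selects

# Hypothetical learnt RESCAL factors: one latent vector per entity (rows
# of A) and one s x s interaction matrix per relation.
A = rng.standard_normal((n_entities, s))
R_friend = rng.standard_normal((s, s))

# RESCAL models the relation slice as X_k ~ A @ R_k @ A.T; entry (i, j)
# scores the triple (entity_i, relation_k, entity_j).
scores = A @ R_friend @ A.T
print(scores[0, 2])  # predicted score for (e0, friend, e2)
```

The paper's question of how to choose s corresponds here to choosing the width of A; its information criteria trade model fit against the number of latent factors.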
47. A Decision-Support Tool for Renal Mass Classification.
- Author
-
Kunapuli, Gautam, Varghese, Bino A., Ganapathy, Priya, Desai, Bhushan, Cen, Steven, Aron, Manju, Gill, Inderbir, and Duddalwar, Vinay
- Subjects
ALGORITHMS ,COMPUTED tomography ,DECISION support systems ,MEDICAL databases ,INFORMATION storage & retrieval systems ,KIDNEY tumors ,MACHINE learning - Abstract
We investigate the viability of statistical relational machine learning algorithms for the task of identifying malignancy of renal masses using radiomics-based imaging features. Features characterizing the texture, signal intensity, and other relevant metrics of the renal mass were extracted from multiphase contrast-enhanced computed tomography images. The recently developed formalism of relational functional gradient boosting (RFGB) was used to learn human-interpretable models for classification. Experimental results demonstrate that RFGB outperforms many standard machine learning approaches as well as the current diagnostic gold standard of visual qualification by radiologists. [ABSTRACT FROM AUTHOR]
- Published
- 2018
- Full Text
- View/download PDF
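Relational functional gradient boosting (RFGB), used in the study above, fits a relational regression tree at each iteration to the pointwise functional gradient of the log-likelihood, which for a sigmoid link takes the simple form y − P(y = 1 | x). A minimal sketch of that gradient computation (the scores and labels are made up, and the tree-fitting step itself is omitted):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical current model scores (psi) and binary labels for four examples.
psi = [0.2, -0.5, 1.0, 0.0]
y = [1, 0, 1, 0]

# Pointwise functional gradients y - P(y=1); the next regression tree in the
# boosted ensemble is fit to these residual-like values.
gradients = [yi - sigmoid(p) for yi, p in zip(y, psi)]
print(gradients)
```

Positive gradients push an example's score up and negative ones push it down, which is what makes the learned ensemble of small relational trees human-interpretable step by step.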
48. Hybrid attention mechanism for few‐shot relational learning of knowledge graphs
- Author
-
Fangqing Guo, Ruixin Ma, Liang Zhao, and Zeyang Li
- Subjects
One shot ,business.industry ,Computer science ,knowledge graph reasoning ,Computer applications to medicine. Medical informatics ,Statistical relational learning ,R858-859.7 ,one‐shot ,few‐shot ,QA76.75-76.765 ,Knowledge graph ,Shot (pellet) ,Computer Vision and Pattern Recognition ,Artificial intelligence ,Computer software ,business ,attention mechanism ,Software ,Mechanism (sociology) - Abstract
Few‐shot knowledge graph (KG) reasoning is a main focus in the field of knowledge graph reasoning. Most studies that broaden the applications of knowledge graphs rely on large numbers of training samples. In practice, however, many relations and entities are missing from a knowledge graph, and new relations typically come with few training instances. To tackle this, in this study the authors aim to predict a new entity given few reference instances, even only one training instance. A few‐shot learning framework based on a hybrid attention mechanism is proposed. The framework employs traditional embedding models to extract knowledge, and uses an attenuated attention network and a self‐attention mechanism to obtain the hidden attributes of entities. Thus, it can learn a matching metric by considering both the learnt embeddings and one‐hop graph structures. The experimental results show that the model achieves significant performance improvements on the NELL‐One and Wiki‐One datasets.
- Published
- 2021
49. Robot Reasoning Using First Order Bayesian Networks
- Author
-
Raza, Saleha, Haider, Sajjad, Williams, Mary-Anne, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Qin, Zengchang, editor, and Huynh, Van-Nam, editor
- Published
- 2013
- Full Text
- View/download PDF
50. Multi-Relational Learning for Recommendation of Matches between Semantic Structures
- Author
-
Szwabe, Andrzej, Misiorek, Pawel, Walkowiak, Przemyslaw, Hutchison, David, editor, Kanade, Takeo, editor, Kittler, Josef, editor, Kleinberg, Jon M., editor, Mattern, Friedemann, editor, Mitchell, John C., editor, Naor, Moni, editor, Nierstrasz, Oscar, editor, Pandu Rangan, C., editor, Steffen, Bernhard, editor, Sudan, Madhu, editor, Terzopoulos, Demetri, editor, Tygar, Doug, editor, Vardi, Moshe Y., editor, Weikum, Gerhard, editor, Goebel, Randy, editor, Siekmann, Jörg, editor, Wahlster, Wolfgang, editor, Graña, Manuel, editor, Toro, Carlos, editor, Howlett, Robert J., editor, and Jain, Lakhmi C., editor
- Published
- 2013
- Full Text
- View/download PDF