7 results for "Samwald Matthias"
Search Results
2. Semantic SenseLab: Implementing the vision of the Semantic Web in neuroscience
- Author
- Samwald, Matthias, Chen, Huajun, Ruttenberg, Alan, Lim, Ernest, Marenco, Luis, Miller, Perry, Shepherd, Gordon, and Cheung, Kei-Hoi
- Published
- 2010
- Full Text
- View/download PDF
3. Emerging practices for mapping and linking life sciences data using RDF — A case series.
- Author
- Marshall, M. Scott, Boyce, Richard, Deus, Helena F., Zhao, Jun, Willighagen, Egon L., Samwald, Matthias, Pichler, Elgar, Hajagos, Janos, Prud'hommeaux, Eric, and Stephens, Susie
- Subjects
LIFE sciences, DATA analysis, RDF (Document markup language), TIME series analysis, MEDICAL care, MEDICAL informatics, INFORMATION needs, METADATA
- Abstract
Abstract: Members of the W3C Health Care and Life Sciences Interest Group (HCLS IG) have published a variety of genomic and drug-related data sets as Resource Description Framework (RDF) triples. This experience has helped the interest group define a general data workflow for mapping health care and life science (HCLS) data to RDF and linking it with other Linked Data sources. This paper presents the workflow along with four case studies that demonstrate it, addressing many of the challenges that may be faced when creating new Linked Data resources. The first case study describes the creation of linked RDF data from microarray data sets, while the second discusses a linked RDF data set created from a knowledge base of drug therapies and drug targets. The third case study describes the creation of an RDF index of biomedical concepts present in unstructured clinical reports and how this index was linked to a drug side-effect knowledge base. The final case study describes the initial development of a linked data set from a knowledge base of small molecules. This paper also provides a detailed set of recommended practices for creating and publishing Linked Data sources in the HCLS domain in such a way that they are discoverable and usable by people, software agents, and applications. These practices are based on the cumulative experience of the Linked Open Drug Data (LODD) task force of the HCLS IG. While no single set of recommendations can address all of the heterogeneous information needs that exist within the HCLS domains, practitioners wishing to create Linked Data should find the recommendations useful for identifying the tools, techniques, and practices employed by earlier developers. In addition to clarifying available methods for producing Linked Data, the recommendations for metadata should also make the discovery and consumption of Linked Data easier. [Copyright Elsevier]
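The mapping-and-linking workflow the abstract describes can be sketched in miniature: each data set becomes a set of subject–predicate–object triples, and "linking" amounts to reusing the same URIs across data sets. All URIs and identifiers below are invented for illustration; they are not real LODD, DrugBank, or SIDER entries.

```python
# Minimal sketch of the HCLS-style workflow: represent each data set as
# RDF-like (subject, predicate, object) triples, then merge the sets.
# All URIs below are hypothetical placeholders.

# triples from a hypothetical drug data set
drug_triples = [
    ("ex:drug/D001", "rdf:type", "ex:Drug"),
    ("ex:drug/D001", "ex:hasTarget", "ex:protein/P42"),
]

# triples from a hypothetical side-effect data set
side_effect_triples = [
    ("ex:drug/D001", "ex:hasSideEffect", "ex:effect/nausea"),
]

# "Linking" simply means both sets use the same subject URI, so the
# merged graph can be traversed from drug to target and to side effect.
graph = set(drug_triples) | set(side_effect_triples)

def objects(g, subject, predicate):
    """Return all objects for a (subject, predicate) pair."""
    return {o for s, p, o in g if s == subject and p == predicate}

print(objects(graph, "ex:drug/D001", "ex:hasSideEffect"))
```

A real pipeline would use an RDF library and SPARQL endpoint rather than Python sets, but the linking principle — shared, dereferenceable identifiers — is the same.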
- Published
- 2012
- Full Text
- View/download PDF
4. Post-hoc explanation of black-box classifiers using confident itemsets.
- Author
- Moradi, Milad and Samwald, Matthias
- Subjects
- ARTIFICIAL intelligence, EXPLANATION, FORECASTING, PREDICTION models, CLASSIFICATION
- Abstract
• Confident itemsets are used to discretize the whole model into subspaces.
• Concise instance-wise explanations approximate the behavior of the black-box.
• Class-wise explanations approximate the black-box's behavior in different subspaces.
• Confident itemsets explanations improve the fidelity by 9.3% over other methods.
• Confident itemsets explanations improve the interpretability by 8.8%.
Black-box Artificial Intelligence (AI) methods, e.g. deep neural networks, have been widely utilized to build predictive models that can extract complex relationships in a dataset and make predictions for new unseen data records. However, it is difficult to trust decisions made by such methods since their inner workings and decision logic are hidden from the user. Explainable Artificial Intelligence (XAI) refers to systems that try to explain how a black-box AI model produces its outcomes. Post-hoc XAI methods approximate the behavior of a black-box by extracting relationships between feature values and the predictions. Perturbation-based and decision set methods are among the most commonly used post-hoc XAI systems. The former explanators rely on random perturbations of data records to build local or global linear models that explain individual predictions or the whole model. The latter explanators use those feature values that appear more frequently to construct a set of decision rules that produces the same outcomes as the target black-box. However, these two classes of XAI methods have some limitations. Random perturbations do not take into account the distribution of feature values in different subspaces, leading to misleading approximations. Decision sets only pay attention to frequent feature values and miss many important correlations between features and class labels that appear less frequently but accurately represent decision boundaries of the model. In this paper, we address the above challenges by proposing an explanation method named Confident Itemsets Explanation (CIE).
We introduce confident itemsets, a set of feature values that are highly correlated to a specific class label. CIE utilizes confident itemsets to discretize the whole decision space of a model to smaller subspaces. Extracting important correlations between the features and the outcomes of the classifier in different subspaces, CIE produces instance-wise and class-wise explanations that accurately approximate the behavior of the target black-box. Conducting a set of experiments on various black-box classifiers, and different tabular and textual data classification tasks, we show that our CIE method performs better than the previous perturbation-based and rule-based explanators in terms of the descriptive accuracy (an improvement of 9.3%) and interpretability (an improvement of 8.8%) of the explanations. Subjective evaluations demonstrate that the users find the explanations of CIE more understandable and interpretable than those of the other comparison methods. [ABSTRACT FROM AUTHOR]
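A minimal sketch of the "confident itemset" notion described in the abstract: an itemset is a set of (feature, value) pairs, and it is confident for a class when most records matching it carry that label. The toy records, feature names, and 0.8 confidence threshold below are all invented for illustration; the actual CIE method is considerably more involved.

```python
from itertools import combinations

# Toy records: (feature dict, class label). Invented data for illustration.
records = [
    ({"color": "red", "size": "small"}, "A"),
    ({"color": "red", "size": "large"}, "A"),
    ({"color": "blue", "size": "small"}, "B"),
    ({"color": "red", "size": "small"}, "A"),
]

def confident_itemsets(records, label, min_conf=0.8):
    """Itemsets (size 1 or 2) whose matching records carry `label`
    with confidence >= min_conf."""
    items = {(f, v) for feats, _ in records for f, v in feats.items()}
    result = []
    for k in (1, 2):
        for itemset in combinations(sorted(items), k):
            covered = [lab for feats, lab in records
                       if all(feats.get(f) == v for f, v in itemset)]
            if covered and covered.count(label) / len(covered) >= min_conf:
                result.append(itemset)
    return result

# ("color", "red") is confident for class "A": all 3 matching records are "A",
# while ("size", "small") is not (2 of 3 matches are "A", below 0.8).
print(confident_itemsets(records, "A"))
```

Each confident itemset carves out a subspace of the decision space, which is how CIE localizes its explanations.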
- Published
- 2021
- Full Text
- View/download PDF
5. Benchmarking neural embeddings for link prediction in knowledge graphs under semantic and structural changes.
- Author
- Agibetov, Asan and Samwald, Matthias
- Abstract
Recently, link prediction algorithms based on neural embeddings have gained tremendous popularity in the Semantic Web community, and are extensively used for knowledge graph completion. While algorithmic advances have strongly focused on efficient ways of learning embeddings, less attention has been paid to the different ways their performance and robustness can be evaluated. In this work we propose an open-source evaluation pipeline, which benchmarks the accuracy of neural embeddings in situations where knowledge graphs may experience semantic and structural changes. We define relation-centric connectivity measures that allow us to connect the link prediction capacity to the structure of the knowledge graph. Such an evaluation pipeline is especially important to simulate the accuracy of embeddings for knowledge graphs that are expected to be frequently updated.
• Benchmark knowledge graph embeddings' accuracy under structural changes
• Correlation of semantic similarity descriptors to performance of embeddings
• Improve accuracy of embeddings by adding instances of semantically related relations
• Error analysis: which type of links pose problems for link prediction
[ABSTRACT FROM AUTHOR]
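The kind of link prediction being benchmarked can be illustrated with a TransE-style toy model, which scores a candidate triple by how well head + relation ≈ tail in embedding space. The entities, relation, and hand-made 2-d embeddings below are invented for illustration; a real pipeline learns the embeddings and evaluates ranks over held-out triples.

```python
# Hand-made 2-d embeddings (TransE convention: h + r should be close to t).
# All names and vectors are illustrative, not learned.
entities = {
    "aspirin":  (0.0, 0.0),
    "headache": (1.0, 1.0),
    "fever":    (1.0, 0.9),
    "insulin":  (5.0, 5.0),
}
relations = {"treats": (1.0, 1.0)}

def score(h, r, t):
    """Negative squared distance ||h + r - t||^2; higher = more plausible."""
    hx, hy = entities[h]
    rx, ry = relations[r]
    tx, ty = entities[t]
    return -((hx + rx - tx) ** 2 + (hy + ry - ty) ** 2)

def rank_tail(h, r, true_t):
    """Rank of the true tail among all candidate entities (1 = best)."""
    ranked = sorted(entities, key=lambda t: score(h, r, t), reverse=True)
    return ranked.index(true_t) + 1

print(rank_tail("aspirin", "treats", "headache"))  # → 1
```

Benchmarks like the one in the abstract aggregate such ranks (e.g. hits@k, mean rank) and then re-measure them after triples are added or removed, to see how robust the embeddings are to structural change.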
- Published
- 2020
- Full Text
- View/download PDF
6. Model-agnostic explainable artificial intelligence for object detection in image data.
- Author
- Moradi, Milad, Yan, Ke, Colwell, David, Samwald, Matthias, and Asgari, Rhona
- Subjects
- OBJECT recognition (Computer vision), ARTIFICIAL intelligence, COMPUTER vision, DEEP learning
- Published
- 2024
- Full Text
- View/download PDF
7. Relational local electroencephalography representations for sleep scoring.
- Author
- Brandmayr, Georg, Hartmann, Manfred, Fürbass, Franz, Matz, Gerald, Samwald, Matthias, Kluge, Tilmann, and Dorffner, Georg
- Subjects
- ELECTROENCEPHALOGRAPHY, EYE movements, COMPUTATIONAL complexity, POLYSOMNOGRAPHY
- Abstract
Computational sleep scoring from multimodal neurophysiological time-series (polysomnography, PSG) has achieved impressive clinical success. Models that use only a single electroencephalographic (EEG) channel from PSG have not yet received the same clinical recognition, since they lack Rapid Eye Movement (REM) scoring quality. The question of whether this lack can be remedied at all remains an important one. We conjecture that predominant Long Short-Term Memory (LSTM) models do not adequately represent distant REM EEG segments (termed epochs), since LSTMs compress these to a fixed-size vector from separate past and future sequences. To this end, we introduce the EEG representation model ENGELBERT (electroENcephaloGraphic Epoch Local Bidirectional Encoder Representations from Transformer). It jointly attends to multiple EEG epochs from both past and future. Compared to typical token sequences in language, for which attention models were originally conceived, overnight EEG sequences easily span more than 1000 30-second epochs. Local attention on overlapping windows reduces the critical quadratic computational complexity to linear, enabling versatile sub-one-hour to all-day scoring. ENGELBERT is at least one order of magnitude smaller than established LSTM models and is easy to train from scratch in a single phase. It surpassed state-of-the-art macro F1-scores in 3 single-EEG sleep scoring experiments. REM F1-scores were pushed to at least 86%. ENGELBERT virtually closed the gap to PSG-based methods from 4–5 percentage points (pp) to less than 1 pp F1-score. [ABSTRACT FROM AUTHOR]
- Published
- 2022
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library