Dual-Targeted adversarial example in evasion attack on graph neural networks.
- Source :
- Scientific reports [Sci Rep] 2025 Jan 31; Vol. 15 (1), pp. 3912. Date of Electronic Publication: 2025 Jan 31.
- Publication Year :
- 2025
Abstract
- This study proposes a novel approach for generating dual-targeted adversarial examples in Graph Neural Networks (GNNs), significantly advancing the field of graph-based adversarial attacks. Unlike traditional methods that focus on inducing specific misclassifications in a single model, our approach creates adversarial samples that can simultaneously target multiple models, each inducing distinct misclassifications. This innovation addresses a critical gap in existing techniques by enabling adversarial attacks that are capable of affecting various models with different objectives. We provide a detailed explanation of the method's principles and structure, rigorously evaluate its effectiveness across several GNN models, and visualize the impact using datasets such as Reddit and OGBN-Products. Our contributions highlight the potential for dual-targeted attacks to disrupt GNN performance and emphasize the need for enhanced defensive strategies in graph-based learning systems.
- Competing Interests: Declarations. Conflicts of Interest: The authors declare that there are no conflicts of interest regarding the publication of this article. Ethical and informed consent for data used: All authors give ethical and informed consent. There are no human or animal experiments in this paper. Also, there is no copyrighted data related to the figures.
- (© 2025. The Author(s).)
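The core idea in the abstract, one adversarial example that simultaneously drives two different models to two different target classes, can be sketched as a joint targeted loss. This is a minimal illustration under strong assumptions, not the paper's method: the two "victim models" are reduced to fixed linear classifiers over a node's already-aggregated feature vector, the perturbation is an unconstrained feature offset rather than a graph edit, and all names (`W1`, `W2`, `t1`, `t2`) are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, t):
    # negative log-probability of the target class
    return -np.log(p[t] + 1e-12)

# Two hypothetical victim models, reduced here to linear classifiers
# over a node's (already aggregated) feature vector -- a deliberate
# simplification of the GNN setting described in the abstract.
d, c = 16, 4                               # feature dim, number of classes
W1 = rng.normal(size=(c, d)) / np.sqrt(d)  # victim model 1 (hypothetical)
W2 = rng.normal(size=(c, d)) / np.sqrt(d)  # victim model 2 (hypothetical)
x = rng.normal(size=d)                     # clean node features
t1, t2 = 1, 3                              # a distinct target class per model

delta = np.zeros(d)                        # one shared perturbation
losses = []
for _ in range(500):
    p1 = softmax(W1 @ (x + delta))
    p2 = softmax(W2 @ (x + delta))
    # dual-targeted objective: sum of the two targeted losses
    losses.append(cross_entropy(p1, t1) + cross_entropy(p2, t2))
    # grad of CE(softmax(W z), t) w.r.t. z is W.T @ (p - onehot(t))
    g = W1.T @ (p1 - np.eye(c)[t1]) + W2.T @ (p2 - np.eye(c)[t2])
    delta -= 0.3 * g                       # gradient descent on the joint loss

pred1 = int(np.argmax(W1 @ (x + delta)))
pred2 = int(np.argmax(W2 @ (x + delta)))
print(losses[0], losses[-1], pred1, pred2)
```

The point of the sketch is only the shape of the objective: a single perturbation optimized against the sum of per-model targeted losses. The paper itself attacks GNNs (perturbing graph data evaluated on datasets such as Reddit and OGBN-Products), which this toy linear setup does not capture.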
Details
- Language :
- English
- ISSN :
- 2045-2322
- Volume :
- 15
- Issue :
- 1
- Database :
- MEDLINE
- Journal :
- Scientific reports
- Publication Type :
- Academic Journal
- Accession number :
- 39890835
- Full Text :
- https://doi.org/10.1038/s41598-025-85493-2