
Normality Learning-based Graph Anomaly Detection via Multi-Scale Contrastive Learning

Authors:
Duan, Jingcan
Zhang, Pei
Wang, Siwei
Hu, Jingtao
Jin, Hu
Zhang, Jiaxin
Zhou, Haifang
Liu, Xinwang
Publication Year:
2023

Abstract

Graph anomaly detection (GAD) has attracted increasing attention in machine learning and data mining. Recent works have mainly focused on capturing richer information to improve the quality of node embeddings for GAD. Despite significant advances in detection performance, there is still a relative dearth of research on the properties of the task itself. GAD aims to discern the anomalies that deviate from most nodes. However, the model is prone to learning the pattern of the normal samples, which make up the majority of the data. Meanwhile, anomalies can be easily detected when their behaviors differ from this normality. Therefore, performance can be further improved by enhancing the model's ability to learn the normal pattern. To this end, we propose a normality learning-based GAD framework via multi-scale contrastive learning networks (NLGAD). Specifically, we first initialize the model with contrastive networks on different scales. To provide sufficient and reliable normal nodes for normality learning, we design an effective hybrid strategy for normality selection. Finally, the model is refined with only the reliable normal nodes as input and learns a more accurate estimate of normality, so that anomalous nodes can be more easily distinguished. Extensive experiments on six benchmark graph datasets demonstrate the effectiveness of our normality learning-based scheme on GAD. Notably, the proposed algorithm improves detection performance (up to 5.89% AUC gain) compared with state-of-the-art methods. The source code is released at https://github.com/FelixDJC/NLGAD.
Comment: 10 pages, 7 figures, accepted by ACM MM 2023
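The pipeline described in the abstract — score nodes by their agreement with a learned normal pattern, then keep only the most "normal" nodes for refinement — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the contrastive score here is a simple one-scale cosine disagreement between node embeddings and a context vector, the `ratio`-based cutoff stands in for the paper's hybrid normality-selection strategy, and all names (`contrastive_anomaly_scores`, `select_reliable_normals`) are hypothetical.

```python
import numpy as np

def contrastive_anomaly_scores(embeddings, context):
    """Anomaly score per node: 1 - cosine similarity between a node's
    embedding and its context. Low agreement -> high anomaly score."""
    num = (embeddings * context).sum(axis=1)
    den = np.linalg.norm(embeddings, axis=1) * np.linalg.norm(context, axis=1) + 1e-12
    return 1.0 - num / den  # roughly in [0, 2]

def select_reliable_normals(scores, ratio=0.5):
    """Stand-in for hybrid normality selection: keep the fraction of
    nodes with the lowest anomaly scores as reliable normal nodes."""
    k = max(1, int(len(scores) * ratio))
    return np.argsort(scores)[:k]

# Toy data: 95 nodes clustered near one pattern, 5 anomalies far from it.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(95, 8)) + 1.0
anomalous = rng.normal(0.0, 0.1, size=(5, 8)) - 1.0
emb = np.vstack([normal, anomalous])
ctx = np.ones_like(emb)  # shared "normal" context for every node

scores = contrastive_anomaly_scores(emb, ctx)
reliable = select_reliable_normals(scores, ratio=0.9)
# In the full framework, the model would now be re-trained on `reliable`
# only, tightening its estimate of normality before final scoring.
print(len(reliable))
```

In the actual NLGAD framework the scores come from multi-scale contrastive networks and the refinement step re-optimizes the model on the selected nodes; the sketch only shows the select-then-refine control flow.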

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2309.06034
Document Type:
Working Paper