
Hawk: Learning to Understand Open-World Video Anomalies

Authors :
Tang, Jiaqi
Lu, Hao
Wu, Ruizheng
Xu, Xiaogang
Ma, Ke
Fang, Cheng
Guo, Bin
Lu, Jiangbo
Chen, Qifeng
Chen, Ying-Cong
Publication Year :
2024

Abstract

Video Anomaly Detection (VAD) systems can autonomously monitor and identify disturbances, reducing the need for manual labor and its associated costs. However, current VAD systems are often limited by their superficial semantic understanding of scenes and minimal user interaction. Additionally, the prevalent data scarcity in existing datasets restricts their applicability in open-world scenarios. In this paper, we introduce Hawk, a novel framework that leverages interactive large Visual Language Models (VLMs) to interpret video anomalies precisely. Recognizing the difference in motion information between abnormal and normal videos, Hawk explicitly integrates the motion modality to enhance anomaly identification. To reinforce attention to motion, we construct an auxiliary consistency loss between the motion and video spaces, guiding the video branch to focus on the motion modality. Moreover, to improve the interpretation of motion-to-language, we establish a clear supervisory relationship between motion and its linguistic representation. Furthermore, we have annotated over 8,000 anomaly videos with language descriptions, enabling effective training across diverse open-world scenarios, and have created 8,000 question-answering pairs for users' open-world questions. The final results demonstrate that Hawk achieves state-of-the-art (SOTA) performance, surpassing existing baselines in both video description generation and question-answering. Our code, dataset, and demo will be released at https://github.com/jqtangust/hawk.
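
As a rough illustration of the auxiliary consistency loss described in the abstract, the sketch below pulls pooled video-branch features toward motion-branch features in a shared embedding space. This is a minimal sketch under assumptions: the cosine-distance form and all names (consistency_loss, video_feat, motion_feat) are hypothetical choices for illustration, not Hawk's released implementation, which may formulate the loss differently.

    # Hypothetical sketch of a motion-video consistency term: encourage the
    # video branch to stay close to the motion branch in a shared space.
    # The cosine form and feature shapes are assumptions, not the paper's code.
    import torch
    import torch.nn.functional as F

    def consistency_loss(video_feat: torch.Tensor,
                         motion_feat: torch.Tensor) -> torch.Tensor:
        """Cosine-distance consistency between video and motion embeddings.

        Both inputs are (batch, dim) pooled features; minimizing this term
        pulls the video representation toward the motion representation.
        """
        video_feat = F.normalize(video_feat, dim=-1)
        motion_feat = F.normalize(motion_feat, dim=-1)
        return (1.0 - (video_feat * motion_feat).sum(dim=-1)).mean()

    # Usage: add this as an auxiliary term alongside the language-modeling loss.
    video_feat = torch.randn(4, 512)   # pooled video-branch features (toy data)
    motion_feat = torch.randn(4, 512)  # pooled motion-branch features (toy data)
    loss_aux = consistency_loss(video_feat, motion_feat)

In training, such a term would typically be weighted and summed with the main generation loss, so that gradients from the consistency objective bias the video branch toward motion-relevant cues without replacing the language supervision.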

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession Number :
edsoai.on1438560893
Document Type :
Electronic Resource