
Motion-aware Contrastive Learning for Temporal Panoptic Scene Graph Generation

Authors:
Nguyen, Thong Thanh
Wu, Xiaobao
Bin, Yi
Nguyen, Cong-Duy T
Ng, See-Kiong
Luu, Anh Tuan
Publication Year:
2024

Abstract

To equip artificial intelligence with a comprehensive understanding of the temporal world, video and 4D panoptic scene graph generation abstracts visual data into nodes, which represent entities, and edges, which capture temporal relations. Existing methods encode entity masks tracked across the temporal dimension (mask tubes) and then predict their relations with a temporal pooling operation, which does not fully exploit the motion indicative of the entities' relations. To overcome this limitation, we introduce a contrastive representation learning framework that focuses on motion patterns for temporal scene graph generation. First, our framework encourages the model to learn close representations for mask tubes of similar subject-relation-object triplets. Second, it pushes mask tubes apart from their temporally shuffled versions. Moreover, it learns distant representations for mask tubes that belong to the same video but to different triplets. Extensive experiments show that our motion-aware contrastive framework significantly improves state-of-the-art methods on both video and 4D datasets.

Comment: Accepted at AAAI 2025
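The abstract names three contrastive signals: pull a mask-tube representation toward tubes of similar triplets, and push it away from both its temporally shuffled version and tubes of other triplets in the same video. The paper's exact loss is not given here, so the following is only a minimal NumPy sketch of a generic InfoNCE-style objective wired up with those three roles; all variable names and the toy embeddings are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def info_nce_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: raise the anchor's similarity to the positive
    relative to its similarity to each negative (cosine score)."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives])
    logits = sims / temperature
    logits -= logits.max()                       # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                     # positive sits at index 0

# Toy stand-ins for learned mask-tube embeddings (hypothetical shapes/values).
rng = np.random.default_rng(0)
tube = rng.normal(size=16)                       # anchor mask tube
same_triplet_tube = tube + 0.05 * rng.normal(size=16)  # similar triplet -> positive
shuffled_tube = rng.permutation(tube)            # temporally shuffled copy -> negative
other_triplet_tube = rng.normal(size=16)         # different triplet, same video -> negative

loss = info_nce_loss(tube, same_triplet_tube,
                     [shuffled_tube, other_triplet_tube])
```

Under this sketch, the loss is small when the anchor is close to its same-triplet positive and far from the shuffled and cross-triplet negatives, which mirrors the three constraints the abstract describes.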

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2412.07160
Document Type:
Working Paper