
Visual-semantic graph neural network with pose-position attentive learning for group activity recognition.

Authors :
Liu, Tianshan
Zhao, Rui
Lam, Kin-Man
Kong, Jun
Source :
Neurocomputing. Jun 2022, Vol. 491, p217-231. 15p.
Publication Year :
2022

Abstract

• A pose-position attention strategy is proposed to update the bi-modal visual graph.
• A linguistic-embedding-based semantic graph is presented to model label relations.
• A semantic-preserving loss is designed to maintain semantic consistency.
• Both visual and semantic information are fused for group activity recognition.

Video-based group activities typically contain interactive contexts among diverse visual modalities between multiple persons, as well as semantic relationships between individual actions. Nevertheless, the majority of existing methods for group activity recognition either capture the relationships among different persons using solely the RGB modality, or neglect to exploit the label hierarchies between individual actions and the group activity. To tackle these issues, we propose a visual-semantic graph neural network with pose-position attentive learning (VSGNN-PAL) for group activity recognition. Specifically, we first extract individual-level appearance and motion representations from RGB and optical-flow inputs to build a bi-modal visual graph. Two attentive aggregators are further proposed to integrate pose and position information to measure the relevance scores between persons, and to dynamically refine the representation of each visual node from both modality-specific and cross-modal perspectives. To model a semantic hierarchy from the label space, we construct a semantic graph based on the linguistic embeddings of the individual-action and group-activity labels. We further employ a bi-directional mapping learning scheme to integrate the label-relation-aware semantic context into the visual representations. In addition, a global reasoning module is introduced to progressively generate the group-level representations while preserving the scene description. Furthermore, we formulate a semantic-preserving loss to maintain the consistency between the learned high-level representations and the semantics of the ground-truth labels. Experimental results on three group activity benchmarks demonstrate that the proposed method achieves state-of-the-art performance. [ABSTRACT FROM AUTHOR]
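
The pose-position attentive aggregation described in the abstract can be pictured roughly as follows. The sketch below is a minimal, hypothetical PyTorch illustration only: attention scores between persons are computed from pose and position cues and used to refine the visual node features of one modality. The class name, feature dimensions, and projection layers are assumptions for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PosePositionAttentiveAggregator(nn.Module):
    # Illustrative sketch (not the authors' exact design): refine per-person
    # visual features with attention weights derived from pose and position cues.
    def __init__(self, feat_dim=1024, pose_dim=34, pos_dim=4, hidden_dim=256):
        super().__init__()
        # Project pose keypoints and bounding-box positions into a shared space.
        self.pose_proj = nn.Linear(pose_dim, hidden_dim)
        self.pos_proj = nn.Linear(pos_dim, hidden_dim)
        # Query/key projections for pairwise relevance between persons.
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        # Value projection operates on the visual (appearance or motion) features.
        self.value = nn.Linear(feat_dim, feat_dim)

    def forward(self, visual_feats, poses, positions):
        # visual_feats: (N, feat_dim), poses: (N, pose_dim), positions: (N, pos_dim)
        cues = torch.tanh(self.pose_proj(poses) + self.pos_proj(positions))
        q, k = self.query(cues), self.key(cues)
        # Pairwise relevance scores between the N persons in the scene.
        scores = torch.matmul(q, k.t()) / (q.size(-1) ** 0.5)
        attn = F.softmax(scores, dim=-1)
        # Refine each visual node by aggregating its neighbours' features.
        refined = torch.matmul(attn, self.value(visual_feats))
        return visual_feats + refined  # residual update of the visual graph nodes

# Toy usage: 6 persons, appearance features from an RGB backbone (all shapes assumed).
agg = PosePositionAttentiveAggregator()
feats = torch.randn(6, 1024)
poses = torch.randn(6, 34)      # e.g. 17 keypoints x (x, y)
boxes = torch.randn(6, 4)       # normalised bounding-box coordinates
out = agg(feats, poses, boxes)  # (6, 1024) refined node representations

In the full model, the abstract indicates that such aggregation is applied from both modality-specific (RGB appearance, optical-flow motion) and cross-modal perspectives; the sketch shows only a single modality-specific update.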

Details

Language :
English
ISSN :
0925-2312
Volume :
491
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
156588613
Full Text :
https://doi.org/10.1016/j.neucom.2022.03.066