
Navigating Open Set Scenarios for Skeleton-based Action Recognition

Authors :
Peng, Kunyu
Yin, Cheng
Zheng, Junwei
Liu, Ruiping
Schneider, David
Zhang, Jiaming
Yang, Kailun
Sarfraz, M. Saquib
Stiefelhagen, Rainer
Roitberg, Alina
Publication Year :
2023

Abstract

In real-world scenarios, human actions often fall outside the distribution of training data, making it crucial for models to recognize known actions and reject unknown ones. However, using pure skeleton data in such open-set conditions poses challenges due to the lack of visual background cues and the distinct sparse structure of body pose sequences. In this paper, we tackle the unexplored Open-Set Skeleton-based Action Recognition (OS-SAR) task and formalize the benchmark on three skeleton-based datasets. We assess the performance of seven established open-set approaches on our task and identify their limits and critical generalization issues when dealing with skeleton information. To address these challenges, we propose a distance-based cross-modality ensemble method that leverages the cross-modal alignment of skeleton joints, bones, and velocities to achieve superior open-set recognition performance. We refer to the key idea as CrossMax, an approach that utilizes a novel cross-modality mean max discrepancy suppression mechanism to align latent spaces during training and a cross-modality distance-based logits refinement method during testing. CrossMax outperforms existing approaches and consistently yields state-of-the-art results across all datasets and backbones. The benchmark, code, and models will be released at https://github.com/KPeng9510/OS-SAR.

Comment: Accepted to AAAI 2024.
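The abstract names CrossMax's two mechanisms without giving details. The sketch below is a minimal PyTorch illustration of what such a pipeline could look like, assuming the "mean max discrepancy" term acts like an MMD-style alignment loss with an RBF kernel and that the test-time refinement weights logits by distance to per-class prototypes. All function names, the kernel choice, and the weighting scheme are illustrative assumptions, not the authors' released implementation; see the linked repository for the actual code.

```python
# Hypothetical sketch of the two CrossMax ingredients named in the abstract.
# Names (mmd, refine_logits, predict_open_set) are illustrative, not the
# authors' API.
import torch


def mmd(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Maximum mean discrepancy with an RBF kernel between two batches of
    latent features of shape [batch, dim] (assumed reading of the
    'mean max discrepancy' term)."""
    def rbf(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * sigma ** 2))
    return rbf(x, x).mean() + rbf(y, y).mean() - 2 * rbf(x, y).mean()


def cross_modality_alignment_loss(z_joint, z_bone, z_vel):
    """Pairwise MMD across the three skeleton modalities; minimizing this
    pulls the joint, bone, and velocity latent spaces together during
    training."""
    return (mmd(z_joint, z_bone) + mmd(z_joint, z_vel) + mmd(z_bone, z_vel)) / 3


@torch.no_grad()
def refine_logits(logits, z, prototypes):
    """Distance-based refinement at test time: down-weight class logits by
    the feature's distance to per-class prototypes, so samples far from all
    known classes score uniformly low and can be rejected. The 1/(1+d)
    weighting is a hypothetical choice."""
    dist = torch.cdist(z, prototypes)  # [batch, num_classes]
    return logits / (1.0 + dist)


def predict_open_set(refined_logits_per_modality, tau=0.5):
    """Ensemble the refined logits over modalities, then reject a sample as
    unknown if its maximum softmax score falls below a threshold tau."""
    scores = torch.stack(refined_logits_per_modality).mean(0).softmax(-1)
    conf, pred = scores.max(-1)
    pred[conf < tau] = -1  # -1 marks "unknown"
    return pred
```

Under this reading, the alignment loss would be added to the per-modality classification losses during training, while the prototype distances and the threshold tau would be estimated from the training set of known classes.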

Details

Database :
arXiv
Publication Type :
Report
Accession Number :
edsarx.2312.06330
Document Type :
Working Paper