Multi‐stream adaptive spatial‐temporal attention graph convolutional network for skeleton‐based action recognition
- Authors
Yu Lubin, Qiliang Du, Jameel Ahmed Bhutto, and Lianfang Tian
- Subjects
Computer science; computer vision; computer graphics; action recognition; graph convolutional networks; convolutional neural nets; space‐time adaptive processing; graphics processing units; software
- Abstract
Skeleton‐based action recognition algorithms have been widely applied to human action recognition. Graph convolutional networks (GCNs) generalize convolutional neural networks (CNNs) to non‐Euclidean graphs and achieve strong performance in skeleton‐based action recognition. However, existing GCN‐based models have several issues: the topology of the graph is defined by the natural skeleton of the human body and is fixed during training, so it may not suit different layers of the GCN model or diverse datasets. In addition, higher‐order information in the joint data, for example, bone and motion information, is not fully utilised. This work proposes a novel multi‐stream adaptive spatial‐temporal attention GCN model that overcomes these issues. The method designs a learnable topology graph that adaptively adjusts connection relationships and strengths and is updated during training along with the other network parameters. Simultaneously, adaptive connection parameters are utilised to optimise the combination of the natural skeleton graph and the adaptive topology graph. A spatial‐temporal attention module is embedded in each graph convolution layer so that the network focuses on the more critical joints and frames. A multi‐stream framework integrates multiple inputs, which further improves the performance of the network. The final network achieves state‐of‐the‐art performance on both the NTU‐RGBD and Kinetics‐Skeleton action recognition datasets. The experimental results show that the proposed method outperforms existing methods on all evaluation protocols, demonstrating its superiority.
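The core idea described in the abstract, combining a fixed skeleton adjacency with a learnable topology graph and applying joint‐ and frame‐wise attention, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; all function names, the balancing scalar `alpha`, and the sigmoid‐of‐mean attention form are illustrative assumptions.

```python
import numpy as np

def adaptive_graph_conv(x, a_skeleton, b_adaptive, w, alpha=1.0):
    """One adaptive spatial graph convolution step (illustrative sketch).

    x:          (C, T, V) features: channels, frames, joints
    a_skeleton: (V, V) fixed adjacency from the natural skeleton
    b_adaptive: (V, V) learnable topology, trained with the network
    w:          (C_out, C) 1x1 convolution weights across channels
    alpha:      scalar balancing the two graphs (assumed parameter)
    """
    a = a_skeleton + alpha * b_adaptive           # combined topology
    y = np.einsum('ctv,vu->ctu', x, a)            # aggregate over joints
    return np.einsum('oc,ctv->otv', w, y)         # 1x1 conv over channels

def spatial_temporal_attention(y):
    """Re-weight joints and frames with sigmoid masks (assumed form)."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    joint_att = sigmoid(y.mean(axis=(0, 1)))      # (V,) per-joint weight
    frame_att = sigmoid(y.mean(axis=(0, 2)))      # (T,) per-frame weight
    return y * frame_att[None, :, None] * joint_att[None, None, :]

# Toy usage: 3 input channels, 4 frames, 5 joints, 8 output channels
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 4, 5))
y = adaptive_graph_conv(x, np.eye(5), np.zeros((5, 5)),
                        rng.standard_normal((8, 3)))
z = spatial_temporal_attention(y)
print(z.shape)  # (8, 4, 5)
```

In a full multi‐stream model, separate streams of this form would process joint, bone, and motion inputs, with their class scores fused at the end.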
- Published
2022