A Quadruple Diffusion Convolutional Recurrent Network for Human Motion Prediction
- Author
Edmond S. L. Ho, Hubert P. H. Shum, Qianhui Men, and Howard Leung
- Subjects
Discriminator, Computer science, G400, 02 engineering and technology, Random walk, Motion capture, Motion (physics), Discontinuity (linguistics), Recurrent neural network, 0202 electrical engineering, electronic engineering, information engineering, Media Technology, Graph (abstract data type), 020201 artificial intelligence & image processing, Electrical and Electronic Engineering, Hidden Markov model, Algorithm
- Abstract
Recurrent neural networks (RNNs) have become popular for human motion prediction thanks to their ability to capture temporal dependencies. However, they have limited capacity for modeling the complex spatial relationships in the human skeletal structure. In this work, we present a novel diffusion convolutional recurrent predictor for spatial and temporal movement forecasting, with multi-step random walks traversing bidirectionally along an adaptive graph to model the interdependency among body joints. In the temporal domain, existing methods rely on a single forward predictor, with the produced motion deflecting onto a drift route that leads to error accumulation over time. We propose to supplement the forward predictor with a forward discriminator to alleviate such motion drift in the long term under adversarial training. The solution is further enhanced by a backward predictor and a backward discriminator to effectively reduce the error, such that the system can also look into the past to improve the prediction at early frames. The two-way spatial diffusion convolutions and two-way temporal predictors together form a quadruple network.

Furthermore, we train our framework by modeling the velocity derived from the observed motion dynamics, instead of static poses, to predict future movements, which effectively reduces the discontinuity problem in early predictions. Our method outperforms the state of the art on both 3D and 2D datasets, including the Human3.6M, CMU Motion Capture, and Penn Action datasets. The results also show that our method correctly predicts both high-dynamic and low-dynamic moving trends with less motion drift.
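The spatial operator described in the abstract, a diffusion convolution driven by multi-step random walks in both directions over a joint graph, can be sketched as below. This is a minimal illustrative NumPy sketch under assumed conventions, not the authors' implementation: the function name `diffusion_conv`, the per-step weight matrices, and the row-normalisation of the adaptive adjacency matrix are assumptions made for the example.

```python
import numpy as np

def diffusion_conv(x, adj, weights_fwd, weights_bwd, num_steps):
    """Bidirectional K-step diffusion convolution over a joint graph (sketch).

    x:            (num_joints, in_dim)  joint features at one time step
    adj:          (num_joints, num_joints) non-negative adaptive adjacency
    weights_fwd:  list of (in_dim, out_dim) matrices, one per forward step
    weights_bwd:  list of (in_dim, out_dim) matrices, one per backward step
    num_steps:    K, the maximum number of random-walk steps
    """
    eps = 1e-8  # guard against isolated joints with zero-degree rows
    # Row-normalise to obtain forward / backward random-walk transition matrices.
    p_fwd = adj / (adj.sum(axis=1, keepdims=True) + eps)
    p_bwd = adj.T / (adj.T.sum(axis=1, keepdims=True) + eps)

    out = np.zeros((x.shape[0], weights_fwd[0].shape[1]))
    h_fwd, h_bwd = x, x
    for k in range(num_steps):
        # Aggregate features reachable within k walk steps in each direction.
        out += h_fwd @ weights_fwd[k] + h_bwd @ weights_bwd[k]
        h_fwd = p_fwd @ h_fwd   # diffuse one more step along the graph
        h_bwd = p_bwd @ h_bwd   # diffuse one step against edge direction
    return out
```

In the full model such a spatial operation would be applied inside a recurrent cell at every time step, with the forward and backward temporal predictors rolling out future frames from predicted velocities rather than absolute poses.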
- Published
2021