
DGGAN: Depth-image Guided Generative Adversarial Networks for Disentangling RGB and Depth Images in 3D Hand Pose Estimation

Authors :
Chen, Liangjian
Lin, Shih-Yao
Xie, Yusheng
Lin, Yen-Yu
Fan, Wei
Xie, Xiaohui
Source :
2020 IEEE Winter Conference on Applications of Computer Vision (WACV)
Publication Year :
2020

Abstract

Estimating 3D hand poses from RGB images is essential to a wide range of potential applications, but is challenging owing to substantial ambiguity in the inference of depth information from RGB images. State-of-the-art estimators address this problem by regularizing 3D hand pose estimation models during training to enforce the consistency between the predicted 3D poses and the ground-truth depth maps. However, these estimators rely on both RGB images and the paired depth maps during training. In this study, we propose a conditional generative adversarial network (GAN) model, called Depth-image Guided GAN (DGGAN), to generate realistic depth maps conditioned on the input RGB image, and use the synthesized depth maps to regularize the 3D hand pose estimation model, therefore eliminating the need for ground-truth depth maps. Experimental results on multiple benchmark datasets show that the synthesized depth maps produced by DGGAN are quite effective in regularizing the pose estimation model, yielding new state-of-the-art results in estimation accuracy, notably reducing the mean 3D end-point errors (EPE) by 4.7%, 16.5%, and 6.8% on the RHD, STB, and MHP datasets, respectively.
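To make the training scheme in the abstract concrete, below is a minimal PyTorch sketch of a conditional GAN whose generator maps an RGB image to a depth map, with the synthesized depth map used as an extra regularization term when training the pose estimator. All module architectures, the joint count (21), the consistency loss, and the loss weights are assumptions for exposition only; they are not the authors' actual DGGAN design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DepthGenerator(nn.Module):
    """Toy stand-in for the depth generator: RGB image -> depth map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )

    def forward(self, rgb):
        return self.net(rgb)

class Discriminator(nn.Module):
    """Conditional discriminator: scores (RGB, depth) pairs."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, rgb, depth):
        return self.net(torch.cat([rgb, depth], dim=1))

class PoseEstimator(nn.Module):
    """Toy 3D pose regressor: 21 hand joints x (x, y, z)."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16, 21 * 3)

    def forward(self, rgb):
        return self.head(self.backbone(rgb)).view(-1, 21, 3)

gen, disc, pose_net = DepthGenerator(), Discriminator(), PoseEstimator()
opt_g = torch.optim.Adam(
    list(gen.parameters()) + list(pose_net.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

rgb = torch.rand(2, 3, 64, 64)         # dummy RGB batch
real_depth = torch.rand(2, 1, 64, 64)  # paired depth, used only to train the GAN
gt_pose = torch.rand(2, 21, 3)         # dummy 3D joint labels

# Discriminator step: real (RGB, depth) pairs vs. synthesized pairs.
fake_depth = gen(rgb).detach()
d_loss = (
    F.binary_cross_entropy_with_logits(disc(rgb, real_depth), torch.ones(2, 1))
    + F.binary_cross_entropy_with_logits(disc(rgb, fake_depth), torch.zeros(2, 1))
)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator + pose step: fool the discriminator, and use the synthesized
# depth map as a regularizer for the pose estimator.
fake_depth = gen(rgb)
g_adv = F.binary_cross_entropy_with_logits(disc(rgb, fake_depth), torch.ones(2, 1))
pred_pose = pose_net(rgb)
pose_loss = F.mse_loss(pred_pose, gt_pose)
# Crude consistency term (an assumption, not the paper's exact loss):
# mean predicted joint depth should agree with the mean synthesized depth.
consistency = F.mse_loss(pred_pose[..., 2].mean(dim=1),
                         fake_depth.mean(dim=(1, 2, 3)))
loss = pose_loss + 0.1 * g_adv + 0.1 * consistency  # 0.1 weights are assumed
opt_g.zero_grad(); loss.backward(); opt_g.step()
```

Note that the discriminator step in this sketch still consumes paired depth to train the generator itself; the point made in the abstract is that, once depth maps can be synthesized from RGB, the pose estimation model no longer requires ground-truth depth maps for its regularization.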

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2012.03197
Document Type :
Working Paper