Motion Guided 3D Pose Estimation from Videos

Jingbo Wang1
Sijie Yan1
Yuanjun Xiong2
Dahua Lin1

1.The Chinese University of Hong Kong
2.Amazon/AWS AI

      For any questions, please email Jingbo Wang: jbwang@ie.cuhk.edu.hk


News

[news] (2020.07.03) The paper has been accepted by ECCV 2020. We will release the code and pre-trained models in our MMSkeleton.





We propose a new loss function, called motion loss, for monocular 3D human pose estimation from videos. In computing the motion loss, a simple yet effective representation of keypoint motion, called pairwise motion encoding, is introduced. We design a new graph convolutional network architecture, U-shaped GCN (UGCN), which captures both short-term and long-term motion information to fully leverage the additional supervision from the motion loss. We train UGCN with the motion loss on two large-scale benchmarks: Human3.6M and MPI-INF-3DHP. Our model surpasses other state-of-the-art models by a large margin. It also demonstrates strong capacity in producing smooth 3D sequences and recovering keypoint motion.
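The idea of a motion loss can be illustrated with a minimal sketch. This is not the paper's exact formulation: here we simply assume motion is encoded as temporal differences of keypoint coordinates over a few frame intervals, and the loss penalizes the L1 discrepancy between predicted and ground-truth motion encodings; the function names and the choice of intervals are illustrative.

```python
import numpy as np

def motion_encoding(poses, interval=1):
    """Temporal-difference motion encoding: displacement of each
    keypoint over a fixed frame interval.
    poses: (T, J, 3) array of 3D joint coordinates over T frames."""
    return poses[interval:] - poses[:-interval]

def motion_loss(pred, gt, intervals=(1, 2, 4)):
    """Mean L1 discrepancy between predicted and ground-truth
    motion encodings, averaged over several frame intervals."""
    loss = 0.0
    for k in intervals:
        loss += np.abs(motion_encoding(pred, k) - motion_encoding(gt, k)).mean()
    return loss / len(intervals)
```

Note that, in this sketch, the motion term is invariant to a constant translation of the whole sequence, so it supervises temporal dynamics (smoothness and keypoint motion) and would be used alongside, not instead of, a per-frame position loss.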



Demo Video



Paper

Motion Guided 3D Pose Estimation from Videos

Jingbo Wang, Sijie Yan, Yuanjun Xiong, and Dahua Lin

European Conference on Computer Vision (ECCV), 2020

BibTex

@article{wang2020motion,
     title={Motion Guided 3D Pose Estimation from Videos},
     author={Wang, Jingbo and Yan, Sijie and Xiong, Yuanjun and Lin, Dahua},
     journal={arXiv preprint arXiv:2004.13985},
     year={2020}
}