Scene-aware Generative Network for Human Motion Synthesis

Jingbo Wang1
Sijie Yan1
Bo Dai2
Dahua Lin1

1.The Chinese University of Hong Kong
2.Nanyang Technological University

      For any questions, please email Jingbo Wang: wj020@ie.cuhk.edu.hk


In this paper, we revisit human motion synthesis, a task useful in various real-world applications. Although a number of methods have been developed for this task, they are often limited in two aspects: 1) they focus on body poses while neglecting the movement of the body's location, and 2) they ignore the impact of the environment on human motion. We propose a new framework in which the interaction between the scene and the human motion is taken into account. Considering the uncertainty of human motion, we formulate this task as a generative one, whose objective is to generate plausible human motion conditioned on both the scene and the human's initial position. Our framework factorizes the distribution of human motions into a distribution of movement trajectories conditioned on scenes, and a distribution of body pose dynamics conditioned on both scenes and trajectories. We further derive a GAN-based learning approach, with discriminators that enforce the compatibility between the human motion and the contextual scene, as well as 3D-to-2D projection constraints.
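The factorization above can be read as p(motion | scene) = p(trajectory | scene) · p(poses | scene, trajectory): first sample a root trajectory from the scene, then sample pose dynamics conditioned on both. The following is a minimal toy sketch of this two-stage sampling structure (not the paper's network); all function names, feature dimensions, and the Gaussian samplers are illustrative placeholders standing in for the learned generators.

```python
import numpy as np

def sample_trajectory(scene_feat, start_pos, horizon, rng):
    # Stage 1 (placeholder): sample a 2D root trajectory conditioned
    # on a scene feature vector, starting from the initial position.
    steps = rng.normal(scale=0.1, size=(horizon, 2)) + 0.01 * scene_feat[:2]
    return start_pos + np.cumsum(steps, axis=0)          # (horizon, 2)

def sample_poses(scene_feat, trajectory, n_joints, rng):
    # Stage 2 (placeholder): sample per-frame joint positions conditioned
    # on the scene and the trajectory from stage 1.
    horizon = trajectory.shape[0]
    poses = rng.normal(scale=0.05, size=(horizon, n_joints, 3))
    poses[:, :, :2] += trajectory[:, None, :]            # attach joints to the moving root
    return poses                                          # (horizon, n_joints, 3)

def synthesize_motion(scene_feat, start_pos, horizon=60, n_joints=17, seed=0):
    # Factorized sampling: trajectory first, then pose dynamics.
    rng = np.random.default_rng(seed)
    traj = sample_trajectory(scene_feat, start_pos, horizon, rng)
    return sample_poses(scene_feat, traj, n_joints, rng)

motion = synthesize_motion(np.zeros(8), np.zeros(2))
print(motion.shape)  # (60, 17, 3): frames x joints x xyz
```

In the paper's GAN-based setting, each of these two sampling stages would be a learned conditional generator, with discriminators judging scene compatibility and 2D projections of the result.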



Demo Video


BibTex

@inproceedings{wang2020motion,
     title={Scene-aware Generative Network for Human Motion Synthesis},
     author={Wang, Jingbo and Yan, Sijie and Dai, Bo and Lin, Dahua},
     booktitle={CVPR},
     year={2021}
}