Can Pose Transfer Models Generate Realistic Human Motion?
- Publication Year: 2025
Abstract
- Recent pose-transfer methods aim to generate temporally consistent and fully controllable videos of human action where the motion from a reference video is reenacted by a new identity. We evaluate three state-of-the-art pose-transfer methods -- AnimateAnyone, MagicAnimate, and ExAvatar -- by generating videos with actions and identities outside the training distribution and conducting a participant study about the quality of these videos. In a controlled environment of 20 distinct human actions, we find that participants, presented with the pose-transferred videos, correctly identify the desired action only 42.92% of the time. Moreover, the participants find the actions in the generated videos consistent with the reference (source) videos only 36.46% of the time. These results vary by method: participants find the splatting-based ExAvatar more consistent and photorealistic than the diffusion-based AnimateAnyone and MagicAnimate.
- Comment: Data and code available at https://github.com/matyasbohacek/pose-transfer-human-motion
Details
- Database: arXiv
- Publication Type: Report
- Accession number: edsarx.2501.15648
- Document Type: Working Paper