By leveraging differentiable physics simulators (DPS), DiffMimic reduces policy learning to a state-matching problem, yielding faster and more stable convergence than reinforcement-learning-based techniques. With its Demonstration Replay mechanism, DiffMimic escapes local optima and outperforms reinforcement-learning-based methods in both sample and time efficiency, enabling characters to learn complex motions rapidly. This approach has the potential to advance future differentiable animation systems.
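The state-matching idea can be illustrated with a minimal, self-contained sketch. The toy simulator, the constant-force "policy" parameter `theta`, and the learning-rate values below are all illustrative assumptions, not DiffMimic's actual implementation (which uses the Brax differentiable simulator); central finite differences stand in here for the analytic gradients a real DPS provides.

```python
# Toy state-matching sketch: a 1D point mass driven by a constant-force
# "policy" theta. A real DPS (e.g. Brax) would backpropagate through the
# rollout; finite differences approximate that gradient here.

DT = 0.1      # simulation time step (illustrative)
STEPS = 20    # rollout length (illustrative)

def rollout(theta, demo=None, replay=False):
    """Simulate the point mass; with replay=True, each step restarts from
    the demonstration state (a loose sketch of Demonstration Replay)."""
    x, v = 0.0, 0.0
    traj = []
    for t in range(STEPS):
        if replay and demo is not None:
            x, v = demo[t]          # continue from the reference state
        v += theta * DT             # policy applies constant force theta
        x += v * DT
        traj.append((x, v))
    return traj

def state_matching_loss(theta, demo, replay=False):
    """Sum of squared position errors against the demonstration."""
    traj = rollout(theta, demo, replay)
    return sum((x - dx) ** 2 for (x, _), (dx, _) in zip(traj, demo))

# Reference motion: generated by a "true" force of 1.5.
demo = rollout(1.5)

# Gradient descent on theta via central finite differences.
theta, lr, eps = 0.0, 0.02, 1e-4
for _ in range(200):
    g = (state_matching_loss(theta + eps, demo) -
         state_matching_loss(theta - eps, demo)) / (2 * eps)
    theta -= lr * g

print(round(theta, 2))  # converges toward the true force, 1.5
```

Demonstration Replay is sketched here as restarting each step from the reference state (akin to teacher forcing), which shortens the gradient path and keeps optimization near the demonstration; DiffMimic's actual mechanism adaptively mixes replayed and simulated states, as detailed in the paper.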
DiffMimic can replicate motions extracted from human videos captured in natural, uncontrolled environments, including dynamic dance movements.
DiffMimic's capabilities extend beyond mimicking simple humanoid motions: it also supports motion replication for a diverse range of characters and scenarios. These include humans wielding weapons such as swords and shields, as well as non-human models such as ants.
DiffMimic is robust to external perturbations, showcasing its adaptability and resilience under varying conditions. We launch randomly generated boxes at humanoid characters to simulate the effect of unexpected external forces on their movements.
@article{ren2023diffmimic,
  author  = {Ren, Jiawei and Yu, Cunjun and Chen, Siwei and Ma, Xiao and Pan, Liang and Liu, Ziwei},
  title   = {DiffMimic: Efficient Motion Mimicking with Differentiable Physics},
  journal = {ICLR},
  year    = {2023},
}