Character Controllers Using Motion VAEs

ACM Transactions on Graphics (SIGGRAPH 2020)

HUNG YU LING, University of British Columbia
FABIO ZINNO, Electronic Arts Vancouver
GEORGE CHENG, Electronic Arts Vancouver
MICHIEL VAN DE PANNE, University of British Columbia


Paper: PDF (11MB) / Code: GitHub / Demo: GitHub

A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently-rich set of motion capture clips. We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motion VAEs. The latent variables of the learned autoencoder define the action space for the movement and thereby govern its evolution over time. Planning or control algorithms can then use this action space to generate desired motions. In particular, we use deep reinforcement learning to learn controllers that achieve goal-directed movements. We demonstrate the effectiveness of the approach on multiple tasks. We further evaluate system-design choices and describe the current limitations of Motion VAEs.
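As a minimal sketch of the control loop described above — an autoregressive decoder conditioned on the previous pose, with the latent variables serving as the action space for a controller — consider the following. The pose dimension, network sizes, and randomly initialized weights are illustrative stand-ins, not the paper's trained Motion VAE:

```python
import numpy as np

rng = np.random.default_rng(0)

POSE_DIM = 6     # illustrative; the paper uses a much richer pose representation
LATENT_DIM = 2   # dimension of the latent "action" space

# Random weights stand in for a trained Motion VAE decoder.
W1 = rng.standard_normal((POSE_DIM + LATENT_DIM, 16)) * 0.1
W2 = rng.standard_normal((16, POSE_DIM)) * 0.1

def decode(prev_pose, z):
    """One autoregressive step: predict the next pose from the
    previous pose (the condition) and a latent sample z (the action)."""
    h = np.tanh(np.concatenate([prev_pose, z]) @ W1)
    return prev_pose + h @ W2   # residual update of the pose

# Rollout: at each step a controller chooses z; here we simply sample
# from the prior, whereas a learned RL policy would output z to reach a goal.
pose = np.zeros(POSE_DIM)
trajectory = [pose]
for t in range(10):
    z = rng.standard_normal(LATENT_DIM)
    pose = decode(pose, z)
    trajectory.append(pose)

print(len(trajectory), trajectory[-1].shape)  # prints: 11 (6,)
```

Because every choice of z yields a pose transition consistent with the training motions, planning or reinforcement learning can operate entirely in this latent action space.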


WebGL Demo

This Motion VAE demo runs in the browser (requires WebGL) using ONNX.js and three.js. Please refer to the paper and video for the other tasks and further details.


BibTeX

@article{Ling2020MVAE,
  author    = {Ling, Hung Yu and Zinno, Fabio and Cheng, George and van de Panne, Michiel},
  title     = {Character Controllers Using Motion VAEs},
  journal   = {ACM Trans. Graph.},
  publisher = {ACM},
  volume    = {39},
  number    = {4},
  year      = {2020}
}