Learning to Predict Motion from Videos via Long-term Extrapolation and Interpolation

¹University of Oxford        ²University College London

We consider the problem of understanding and extrapolating mechanical phenomena with recurrent deep networks. (a) Experimental setup: an orthographic camera looks down at a ball rolling in a 3D bowl. (b) Example of a 3D trajectory in the bowl, simulated using Blender 2.77's OpenGL renderer. (c) An example of a rendered frame from the 'Ellipse' experiment that is fed to our model as input.


While the basic laws of Newtonian mechanics are well understood, explaining a physical scenario still requires manually modeling the problem with suitable equations and associated parameters. In order to adopt such models for artificial intelligence, researchers have handcrafted the relevant states and then used neural networks to learn the state transitions, using simulation runs as training data. Unfortunately, such approaches can be unsuitable for modeling complex real-world scenarios, where manually authoring relevant state spaces tends to be challenging. In this work, we investigate whether neural networks can implicitly learn physical states of real-world mechanical processes based only on visual data, and thus enable long-term physical extrapolation. We develop a recurrent neural network architecture for this task and also characterize the resultant uncertainties in the form of evolving variance estimates. We evaluate our setup by extrapolating the motion of a ball rolling inside a bowl of varying shape and orientation using only top-view images as input, and report results competitive with approaches that assume access to internal physics models and parameters.
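The recurrent extrapolation idea described above can be sketched in a few lines: a hidden state is warmed up on features of the observed frames, then rolled forward without further input, emitting a position estimate and a variance at each step. This is only an illustrative sketch, not the paper's architecture; the dimensions, the random stand-in weights, and the `step`/`extrapolate` names are all assumptions, and a real system would use learned CNN features and trained recurrent weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: feature vector per frame, recurrent hidden state.
FEAT, HID = 8, 16

# Randomly initialised weights stand in for a trained network.
W_in  = rng.normal(0, 0.1, (HID, FEAT))
W_hh  = rng.normal(0, 0.1, (HID, HID))
W_mu  = rng.normal(0, 0.1, (2, HID))    # head for predicted (x, y) position
W_var = rng.normal(0, 0.1, (2, HID))    # head for per-coordinate log-variance

def step(h, feat):
    """One recurrent update: new hidden state plus (mean, variance) outputs."""
    h = np.tanh(W_in @ feat + W_hh @ h)
    mu = W_mu @ h
    var = np.exp(W_var @ h)             # exp keeps the variance positive
    return h, mu, var

def extrapolate(frames, n_future):
    """Warm up on observed frame features, then roll the state forward blindly."""
    h = np.zeros(HID)
    for f in frames:                    # consume the observed frames
        h, _, _ = step(h, f)
    preds = []
    for _ in range(n_future):           # long-term extrapolation: no new input
        h, mu, var = step(h, np.zeros(FEAT))
        preds.append((mu, var))
    return preds

# Stand-ins for CNN features of the first 4 input frames.
frames = rng.normal(size=(4, FEAT))
preds = extrapolate(frames, n_future=10)
```

Training such a model with a Gaussian negative log-likelihood on the predicted mean and variance is one common way to obtain the kind of evolving uncertainty estimates mentioned in the abstract.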

arXiv links

@article{ehrhardt2017mechanics,
   author = {{S\'ebastien} Ehrhardt and Aron Monszpart and Andrea Vedaldi and Niloy {J. Mitra}},
    title = "{Learning to Represent Mechanics via Long-term Extrapolation and Interpolation}",
  journal = {arXiv preprint arXiv:1706.02179},
archivePrefix = "arXiv",
   eprint = {1706.02179},
     year = 2017,
    month = jun
}

@article{ehrhardt2017predictor,
   author = {{S\'ebastien} Ehrhardt and Aron Monszpart and Niloy {J. Mitra} and Andrea Vedaldi},
    title = "{Learning A Physical Long-term Predictor}",
  journal = {arXiv preprint arXiv:1703.00247},
archivePrefix = "arXiv",
   eprint = {1703.00247},
     year = 2017,
    month = mar
}

Comparison of the extrapolation capabilities of different systems with and without angular velocity on 8 videos from the validation set.

Extrapolation capabilities of the system trained on a fixed heightfield, shown on 6 example scenes from the validation set. Left column with insets: the first 4 input frames the system sees. Middle image: the system's prediction of the object's position, having seen only the frames on the left. Right image: ground-truth position.