Neural Re-Simulation for Generating Bounces in Single Images
- Carlo Innamorati 1
- Bryan Russell 2
- Danny Kaufman 2
- Niloy J. Mitra 1,2
1University College London
2Adobe Research
ICCV 2019
Abstract
We introduce a method to generate videos of dynamic virtual objects plausibly interacting, via collisions, with the environment of a still image. Given a starting trajectory, physically simulated using geometry estimated from a single, static input image, we learn to 'correct' this trajectory to a visually plausible one via a neural network. The network can thus be seen as correcting traditional simulation output, generated with incomplete and imprecise world information, into context-specific, visually plausible re-simulated output, a process we call neural re-simulation. We train our system on a set of 50k synthetic scenes in which a moving virtual object (a ball) has been physically simulated. We demonstrate our approach on both our synthetic dataset and a collection of real-life images depicting everyday scenes, obtaining consistent improvements over baseline alternatives throughout.
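The two-stage pipeline described in the abstract, simulate against estimated geometry, then apply a learned per-frame correction, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the single ground-plane scene, and the stand-in "correction" callable (which would be a trained neural network in the actual system) are all hypothetical.

```python
def simulate_trajectory(p0, v0, ground_y=0.0, restitution=0.8,
                        dt=1.0 / 30.0, steps=60, gravity=-9.81):
    """Forward-simulate a 2D ball under gravity against a single ground
    plane, standing in for simulation with the estimated scene geometry."""
    x, y = p0
    vx, vy = v0
    traj = []
    for _ in range(steps):
        vy += gravity * dt
        x += vx * dt
        y += vy * dt
        if y < ground_y:              # collision with the (estimated) ground
            y = ground_y
            vy = -vy * restitution    # bounce with energy loss
        traj.append((x, y))
    return traj

def correct_trajectory(traj, residual_fn):
    """Apply a learned per-frame correction (here a stand-in callable,
    in place of the paper's neural network) to the simulated trajectory."""
    return [(x + dx, y + dy)
            for (x, y), (dx, dy) in zip(traj, map(residual_fn, traj))]

# Trivial stand-in "network": nudge every frame slightly upward.
corrected = correct_trajectory(
    simulate_trajectory((0.0, 1.0), (1.0, 0.0)),
    residual_fn=lambda p: (0.0, 0.01),
)
```

The key design point mirrored here is that the physics simulation produces a full candidate trajectory first, and the learned component only outputs corrections on top of it, rather than predicting motion from scratch.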
Video
Bibtex
@INPROCEEDINGS{InnamoratiEtAl:DynamicBounce:ICCV:2019,
  author    = {Innamorati, Carlo and Russell, Bryan and Kaufman, Danny and Mitra, Niloy J.},
  title     = {Neural Re-Simulation for Generating Bounces in Single Images},
  booktitle = {{ICCV}},
  year      = {2019}
}
Acknowledgements
This work was supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 642841 and by the ERC Starting Grant SmartGeometry (StG-2013-335373).