Escaping Plato's Cave: 3D Shape From Adversarial Rendering

University College London

ICCV 2019

We encode a 2D input image using an encoder E into a latent code z and feed it to a generator G to produce a 3D volume. This 3D volume is passed to a rendering layer R to produce a 2D rendered image, which is presented to a discriminator D. The rendering layer is controlled by an image formation model (visual hull (VH), absorption-only (AO) or emission-absorption (EA)) and by view sampling. The discriminator D is trained to distinguish such rendered imagery from an unstructured 2D photo collection, i.e., images of the same class of objects, but not necessarily with repeated instances, views or lighting, and with no assumptions about their relation (e.g., annotated feature points, view specifications).
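This data flow, E → G → R → D, can be made concrete with toy stand-ins. Every function body below is a placeholder for illustration only (the paper's E, G and D are deep networks and R is one of the differentiable rendering layers); shapes and names are assumptions, not the released architecture.

```python
import numpy as np

def E(image):                       # encoder: 2D image -> latent code z
    return image.reshape(-1)        # placeholder: flatten to a vector

def G(z):                           # generator: latent code z -> 3D volume
    side = int(round(len(z) ** 0.5))
    # Placeholder: repeat z into a (side, side, side) occupancy grid in [0, 1].
    return np.clip(np.tile(z, side).reshape(side, side, side), 0.0, 1.0)

def R(volume):                      # rendering layer (visual-hull variant)
    return volume.max(axis=0)       # project occupancy along a sampled view axis

def D(image):                       # discriminator: 2D image -> realness score
    return float(image.mean())      # placeholder critic

image = np.random.default_rng(0).random((4, 4))
fake = R(G(E(image)))               # 2D rendering of the generated 3D volume
score = D(fake)                     # in training, compared against real photos
```

During training, D sees both `fake` renderings and real photos from the collection, and the adversarial loss back-propagates through R and G so that the generated volumes render plausibly from any sampled view.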


We introduce PLATONICGAN to discover the 3D structure of an object class from an unstructured collection of 2D images, i.e., neither relations between the images nor any additional information about them is available. The key idea is to train a deep neural network to generate 3D shapes whose renderings are indistinguishable from ground-truth images (to a discriminator) under various camera models (i.e., rendering layers) and camera poses. Discriminating 2D images instead of 3D shapes allows tapping into unstructured 2D photo collections instead of relying on curated (e.g., aligned, annotated) 3D data sets. To establish constraints between 2D image observations and their 3D interpretation, we suggest a family of effectively differentiable rendering layers: visual hull, absorption-only (akin to x-ray), and emission-absorption. We successfully reconstruct 3D shapes from unstructured 2D images and extensively evaluate PLATONICGAN on a range of synthetic and real data sets, achieving consistent improvements over baseline methods. We also show that our method with additional 3D supervision further improves result quality and even surpasses the performance of 3D-supervised methods.
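The three image formation models can be sketched as projections of a density volume along the view axis. This is a minimal NumPy version assuming a view-aligned (D, H, W) density grid with values in [0, 1]; the paper's layers are differentiable network components, and for emission-absorption a separate emission (color) grid would normally be composited alongside the density, omitted here for brevity.

```python
import numpy as np

def render(volume, mode="ea", step=1.0):
    """Project a density volume (D, H, W) along depth under one of three models."""
    if mode == "vh":
        # Visual hull: a pixel is occupied if any voxel along its ray is.
        return volume.max(axis=0)
    if mode == "ao":
        # Absorption-only (x-ray like): each voxel attenuates the ray.
        transmittance = np.prod(1.0 - volume * step, axis=0)
        return 1.0 - transmittance
    if mode == "ea":
        # Emission-absorption: front-to-back alpha compositing along the ray.
        image = np.zeros(volume.shape[1:])
        transmittance = np.ones(volume.shape[1:])
        for d in range(volume.shape[0]):
            alpha = volume[d] * step
            image += transmittance * alpha  # emission taken equal to density here
            transmittance *= 1.0 - alpha
        return image
    raise ValueError(mode)
```

All three reduce to element-wise products, sums and a max, so gradients flow from the rendered 2D image back into the 3D volume, which is what lets the 2D discriminator supervise the 3D generator.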


Visual results for 3D reconstruction of three classes (airplane, chair, rifle) from multiple views.

3D reconstruction of different trees using the emission-absorption image formation model, seen from different views (columns). The small images were used as input. We see that PLATONICGAN has understood the 3D structure, including a distinctly colored stem, fractal geometry and structured leaf textures.


Thanks to Pengsheng Guo (mail) for pointing out that our chamfer distance error metric was slightly incorrect. When calculating the chamfer distance we did not invert the binary target voxels (target = 1 - target) before calling scipy.ndimage.distance_transform_edt, which returns the distance to the closest background (not foreground) element. We have corrected the error calculation (see code) and updated Tbl. 2 in the updated arXiv version. (For comparison, see the previous version.) Note that the conclusions remain unaffected.
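The corrected metric can be sketched as follows. This is a minimal NumPy/SciPy version (function name and the exact averaging are illustrative; the released code is authoritative): since distance_transform_edt measures the distance to the nearest zero entry, each grid must be inverted so the transform measures distance to that grid's foreground.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def chamfer_distance(pred, target):
    """Symmetric chamfer distance between two binary voxel grids."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    # EDT of the inverted grid = distance to that grid's nearest foreground voxel.
    dist_to_target = distance_transform_edt(~target)
    dist_to_pred = distance_transform_edt(~pred)
    # Average distances over each shape's own foreground voxels.
    return dist_to_target[pred].mean() + dist_to_pred[target].mean()
```

Without the inversion, the transform reports distances to the nearest background voxel, which is near zero almost everywhere and therefore underestimates the error, as the erratum describes.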

  title={Escaping Plato's Cave: 3D Shape From Adversarial Rendering},
  author={Henzler, Philipp and Mitra, Niloy J and Ritschel, Tobias},
  booktitle={2019 IEEE/CVF International Conference on Computer Vision (ICCV)},