Check out the following tutorial on ReLU Fields, included in the latest release of Mitsuba (v3.0).
In many recent works, multi-layer perceptrons (MLPs) have been shown to be suitable for modeling complex spatially-varying functions, including images and 3D scenes. Although MLPs are able to represent complex scenes with unprecedented quality and memory footprint, this expressive power comes at the cost of long training and inference times. On the other hand, bilinear/trilinear interpolation on regular grid-based representations gives fast training and inference times, but cannot match the quality of MLPs without requiring significant additional memory. Hence, in this work, we investigate the smallest change to grid-based representations that retains the high-fidelity results of MLPs while enabling fast reconstruction and rendering times. We introduce a surprisingly simple change that achieves this: applying a fixed non-linearity (ReLU) to interpolated grid values. When combined with coarse-to-fine optimization, we show that such an approach becomes competitive with the state of the art. We report results on radiance fields and occupancy fields, and compare against multiple existing alternatives.
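The core operation is small enough to state in a few lines: interpolate the raw grid values first, then apply the fixed ReLU. Below is a minimal NumPy sketch for the 2D case; the function name, grid layout, and coordinate convention (continuous coordinates in lattice units) are illustrative, not the authors' implementation.

```python
import numpy as np

def relu_field_eval(grid, x, y):
    """Evaluate a 2D ReLU Field at continuous coordinates (x, y).

    `grid` stores raw (pre-activation) values at integer lattice points.
    The field value is ReLU(bilinear_interpolation(grid, x, y)):
    interpolation happens first, the fixed non-linearity second.
    """
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, grid.shape[0] - 1)
    y1 = min(y0 + 1, grid.shape[1] - 1)
    tx, ty = x - x0, y - y0
    # Standard bilinear interpolation on the raw grid values.
    v = (grid[x0, y0] * (1 - tx) * (1 - ty)
         + grid[x1, y0] * tx * (1 - ty)
         + grid[x0, y1] * (1 - tx) * ty
         + grid[x1, y1] * tx * ty)
    # The "little non-linearity": clamp negative interpolants to zero.
    return max(v, 0.0)
```

Because the stored values may be negative, the zero crossing of the interpolant (and hence the kink introduced by the ReLU) can sit anywhere inside a cell, which a plain interpolated grid cannot achieve.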
Representing a ground-truth function (blue) in a 1D (a) and 2D (b) grid cell using the linear basis (yellow) and a ReLU Field (pink). The reference has a C¹ discontinuity inside the domain that a linear basis cannot capture. A ReLU Field can pick two (possibly negative) values whose interpolation, after the ReLU, reproduces such a kink.
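The 1D case from the figure can be reproduced numerically. In this sketch (the endpoint values are illustrative), endpoints of opposite sign place a kink strictly inside the cell:

```python
def relu_field_1d(v0, v1, t):
    """Linear interpolation between cell endpoints, then ReLU."""
    return max((1 - t) * v0 + t * v1, 0.0)

# With endpoint values of opposite sign, the field is zero up to the
# interior point t* = v0 / (v0 - v1) and rises linearly afterwards:
# a kink (C1 discontinuity) a plain linear basis cannot produce.
v0, v1 = -2.0, 2.0   # illustrative values; kink at t* = 0.5
```

Evaluating `relu_field_1d(v0, v1, t)` at `t = 0.25` gives `0.0` (the interpolant is still negative), while `t = 0.75` gives `1.0`, i.e. the function is flat on the left half of the cell and linear on the right.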
As a didactic example, we fit an image into a 2D ReLU-Field grid, similar to SIREN, where the grid values are stored as floats.
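As a 1D analogue of this fitting procedure, the sketch below fits a step function into a coarse ReLU-Field grid by stochastic gradient descent, with gradients written out by hand. The grid size, target, learning rate, and positive initialization are all illustrative choices, not the paper's setup (which also uses coarse-to-fine optimization, omitted here):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_cells = 256, 8                    # coarse grid: 9 stored floats
target = lambda x: (x > 0.37).astype(float)    # illustrative ground truth with a jump

# Small positive init so every cell starts "alive" (pre-activation > 0).
grid = rng.uniform(0.01, 0.1, n_cells + 1)
lr = 0.5

for step in range(2000):
    x = rng.uniform(0.0, 1.0, n_samples)
    cell = np.minimum((x * n_cells).astype(int), n_cells - 1)
    t = x * n_cells - cell
    pre = (1 - t) * grid[cell] + t * grid[cell + 1]   # linear interpolation
    out = np.maximum(pre, 0.0)                        # ReLU field value
    err = out - target(x)
    # Backprop by hand: dL/dpre = 2*err*(pre > 0)/n, routed to the two
    # cell corners with their interpolation weights.
    g = 2.0 * err * (pre > 0) / n_samples
    grad = np.zeros_like(grid)
    np.add.at(grad, cell, g * (1 - t))
    np.add.at(grad, cell + 1, g * t)
    grid -= lr * grad
```

After training, the fitted field is near zero left of the jump and near one right of it, with the ReLU placing the transition inside a cell rather than forcing it onto a grid node.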
Evaluation results on modeling 3D geometries as occupancy fields. The metric used is volumetric IoU (↑). The MLP baseline is our implementation of Occupancy Networks.
Evaluation results on 3D synthetic scenes. Metrics used are PSNR (↑) / LPIPS (↓). The column NeRF-TF* quotes PSNR values from prior work, and as such we do not have a comparable runtime for this method.
Qualitative comparison between NeRF-PT, Grid, and ReLU Field. The grid-based versions converge much faster, and ReLU Field shows significant sharpness improvements over Grid, for example in the leaves of the plant.
@inproceedings{ReluField_sigg_22,
author = {Karnewar, Animesh and Ritschel, Tobias and Wang, Oliver and Mitra, Niloy},
title = {ReLU Fields: The Little Non-Linearity That Could},
year = {2022},
isbn = {9781450393379},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3528233.3530707},
doi = {10.1145/3528233.3530707},
booktitle = {ACM SIGGRAPH 2022 Conference Proceedings},
articleno = {27},
numpages = {9},
keywords = {spatial representations, volume rendering,
neural representations, regular data structures},
location = {Vancouver, BC, Canada},
series = {SIGGRAPH '22}
}
The research was partially supported by the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 956585, gifts from Adobe, and the UCL AI Centre.
This project webpage was built using the super-cool HyperNeRF project webpage as a template.