Neural Geometry Processing via Spherical Neural Surfaces
- Romy Williamson1
- Niloy J. Mitra1,2
1University College London
2Adobe Research
[arXiv]
[Supplemental]
We encode input genus-0 surfaces as overfitted neural networks and propose operators on them. Specifically, we describe how to compute the geometric Jacobian, the fundamental forms, and curvatures. We also define a Laplace-Beltrami operator directly on the neural representation, enabling processing of scalar (or vector) fields and spectral analysis. Although the network has a finite number of parameters, so the geometry representation is still a discretisation in one sense, we avoid any discretisation of the differential operators, as is commonly required with a traditional surface representation (e.g., a polygonal mesh).
Abstract
Neural surfaces (e.g., neural map encodings, deep implicits and neural radiance fields) have recently gained popularity because of their generic structure (e.g., multi-layer perceptron) and easy integration with modern learning-based setups.
Traditionally, we have a rich toolbox of geometry processing algorithms designed for polygonal meshes to analyze and operate on surface geometry.
However, neural representations are typically discretized and converted into a mesh, before applying any geometry processing algorithm. This is unsatisfactory and, as we demonstrate, unnecessary. In this work, we propose a spherical neural surface representation (a spherical parametrization) for genus-0 surfaces and demonstrate how to compute core geometric operators directly on this representation. Namely, we show how to construct the normals and the first and second fundamental forms of the surface, and how to compute the surface gradient, surface divergence and Laplace-Beltrami operator on scalar/vector fields defined on the surface.
These operators, in turn, enable us to create geometry processing tools that act directly on the neural representations without any unnecessary discretization or meshing. We demonstrate illustrative applications in (neural) spectral analysis, heat flow and mean curvature flow, and our method shows robustness to isometric shape variations. We both propose theoretical formulations and validate their numerical estimates. By systematically linking neural surface representations with classical geometry processing algorithms, we believe this work can become a key ingredient in enabling neural geometry processing.
Method Overview
Given an input surface (a mesh, in this case), we progressively overfit an MLP to represent that individual surface.
The loss term is simply the MSE between the ground-truth and predicted surface positions, plus a (scaled) normals regularization term.
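As a concrete sketch, the overfitting loop might look like the following. This is a minimal PyTorch illustration: the architecture, layer sizes and names are our assumptions rather than the authors' implementation, and the normals regularization term is elided for brevity.

```python
import torch
import torch.nn as nn

# Hypothetical SNS: an MLP mapping points on the unit sphere to surface points.
class SphericalNeuralSurface(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(),
            nn.Linear(hidden, hidden), nn.Softplus(),
            nn.Linear(hidden, 3),
        )

    def forward(self, p):  # p: (N, 3) points on the unit sphere
        return self.net(p)

def fit(sns, sphere_pts, gt_pts, steps=500, lr=1e-3):
    # MSE between predicted and ground-truth surface positions.
    # (The paper adds a scaled normals regularization term, omitted here.)
    opt = torch.optim.Adam(sns.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((sns(sphere_pts) - gt_pts) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()
```

For example, overfitting to the target map `p -> 2p` (a sphere of radius 2) drives the position MSE close to zero within a few hundred steps.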
We compute the Jacobian of the output with respect to the input, using autograd. This Jacobian tells us how tangent vectors on the sphere map to tangent vectors on the surface. The 'local' Jacobian tells us how vectors in local tangent-plane coordinates on the sphere map to tangent vectors on the surface. The normal to the surface must then be orthogonal to both columns of the local Jacobian.
The main advantage of Spherical Neural Surfaces as a geometry representation is that many important quantities from continuous differential geometry can be computed extremely naturally, without any need for approximation or discretization. This is thanks to the automatic differentiation functionality built into modern machine learning setups. For example, to compute the outward-pointing normal, we simply compute the 3 × 3 Jacobian of the output with respect to the input using autograd, and then turn this into a Jacobian for a local 2D parametrization by choosing a suitable orthonormal basis for the tangent plane on the sphere.
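The step above can be sketched as follows. The helper names are hypothetical; `torch.autograd.functional.jacobian` supplies the 3 × 3 Jacobian, and the normal falls out as the cross product of the two local Jacobian columns.

```python
import torch

def tangent_basis(p):
    # Orthonormal basis (t1, t2) of the sphere's tangent plane at unit vector p
    a = torch.tensor([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 else torch.tensor([0.0, 1.0, 0.0])
    t1 = torch.linalg.cross(p, a)
    t1 = t1 / t1.norm()
    t2 = torch.linalg.cross(p, t1)
    return t1, t2

def local_jacobian_and_normal(f, p):
    # 3x3 Jacobian of the surface map f at p, via autograd
    J = torch.autograd.functional.jacobian(f, p)
    t1, t2 = tangent_basis(p)
    Jl = torch.stack([J @ t1, J @ t2], dim=1)    # (3, 2) local Jacobian
    n = torch.linalg.cross(Jl[:, 0], Jl[:, 1])   # orthogonal to both columns
    return Jl, n / n.norm()
```

As a sanity check, for the map `f(p) = 2p` (a sphere of radius 2) the returned normal at `p` is `p` itself, i.e. the outward radial direction.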
After computing the normals function and the local Jacobian, we can use standard differential-geometry formulas to compute the first and second fundamental forms, and quantities derived from them, such as curvatures and min/max curvature directions.
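These formulas can be sketched in PyTorch as follows, using an explicit (θ, φ) parametrization for illustration (our own simplification, not the paper's construction): with the standard identities I = JᵀJ and II_ab = n · ∂²g/∂u_a∂u_b, the shape operator S = I⁻¹II yields Gaussian curvature K = det S and mean curvature H = ½ tr S.

```python
import torch

def sph(u):
    # (theta, phi) -> point on the unit sphere
    th, ph = u[0], u[1]
    return torch.stack([torch.sin(th) * torch.cos(ph),
                        torch.sin(th) * torch.sin(ph),
                        torch.cos(th)])

def fundamental_forms(f, u):
    # Fundamental forms of the surface g(u) = f(sph(u)):
    # I = J^T J and II_ab = n . d^2 g / du_a du_b.
    g = lambda v: f(sph(v))
    J = torch.autograd.functional.jacobian(g, u)          # (3, 2)
    n = torch.linalg.cross(J[:, 0], J[:, 1])
    n = n / n.norm()
    Hs = [torch.autograd.functional.hessian(lambda v, k=k: g(v)[k], u)
          for k in range(3)]                              # per-component (2, 2)
    I = J.T @ J
    II = n[0] * Hs[0] + n[1] * Hs[1] + n[2] * Hs[2]
    return I, II, n

def curvatures(I, II):
    S = torch.linalg.solve(I, II)                     # shape operator
    return torch.linalg.det(S), 0.5 * torch.trace(S)  # Gaussian, mean
```

With the identity map (so the surface is the unit sphere itself), this recovers K = 1 and |H| = 1, matching the analytic values.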
We compute differential quantities on an SNS overfitted to an analytically-defined shape, and compare the results to the ground truth for the analytic shape (computed using symbolic MATLAB). Please see the paper and supplemental materials for a quantitative evaluation.
We compute a continuous Laplace-Beltrami operator (LBO) via two formulations, and verify them against the cotan LBO on a dense mesh.
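A simple sanity check of a continuous LBO, restricted here to the unit sphere (our own simplified setting, not the paper's general SNS formulations): a scalar field on the sphere can be extended radially as a degree-0 homogeneous function, and then the ambient Laplacian at a surface point equals the Laplace-Beltrami of the field.

```python
import torch

def sphere_laplacian(field, p):
    # Laplace-Beltrami of a scalar field on the unit sphere: extend the field
    # radially (degree-0 homogeneous) and take the ambient Laplacian at p.
    p = p.clone().requires_grad_(True)
    phi = field(p / p.norm())
    g, = torch.autograd.grad(phi, p, create_graph=True)
    lap = 0.0
    for i in range(3):                    # trace of the ambient Hessian
        h, = torch.autograd.grad(g[i], p, retain_graph=True)
        lap = lap + h[i]
    return lap
```

The field z is a degree-1 spherical harmonic, so its LBO on the unit sphere is -l(l+1)z = -2z, which this autograd construction reproduces.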
We can also represent scalar fields on the surface, using another small MLP. This network describes a scalar field on the sphere, which implicitly defines a scalar field on the surface through composition with the SNS.
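The composition is straightforward to sketch. Here a second small MLP plays the role of the neural scalar field, and the surface map is a placeholder standing in for a trained SNS (both are our illustrative assumptions):

```python
import torch
import torch.nn as nn

# A second small MLP defines a scalar field on the unit sphere; composing it
# with the surface map implicitly defines the same field on the surface.
field = nn.Sequential(nn.Linear(3, 32), nn.Softplus(), nn.Linear(32, 1))
surface = lambda p: 2.0 * p            # placeholder SNS: sphere of radius 2

p = torch.randn(5, 3)
p = p / p.norm(dim=-1, keepdim=True)   # sample points on the unit sphere
x = surface(p)                         # corresponding surface points
phi = field(p).squeeze(-1)             # scalar field value at each x
```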
By viewing the eigenfunctions of the LBO as solutions to a constrained energy-minimisation problem, and computing the surface integrals by Monte-Carlo estimation, we can find the smallest spectral modes of the LBO through gradient descent on the neural scalar-field representation. This approach is novel, and it demonstrates that, with modern tools, classic geometry processing tasks can be completed without reverting to a discrete representation like a mesh.
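The idea can be sketched as minimising a Monte-Carlo Rayleigh quotient over the parameters of the neural scalar field. We restrict to the unit sphere (identity SNS) to keep the example self-contained; the names and architecture are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

def sphere_samples(n):
    p = torch.randn(n, 3)
    return p / p.norm(dim=-1, keepdim=True)

def rayleigh_quotient(field, p):
    # Monte-Carlo estimate of E[|grad_S phi|^2] / E[phi^2] on the unit sphere,
    # with the surface gradient obtained by projecting the ambient gradient
    # of the field network onto the tangent plane.
    p = p.clone().requires_grad_(True)
    phi = field(p).squeeze(-1)
    g, = torch.autograd.grad(phi.sum(), p, create_graph=True)
    g_t = g - (g * p).sum(-1, keepdim=True) * p   # remove the radial part
    phi0 = phi - phi.mean()                       # orthogonality to constants
    return (g_t ** 2).sum(-1).mean() / (phi0 ** 2).mean()

torch.manual_seed(0)
field = nn.Sequential(nn.Linear(3, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(field.parameters(), lr=1e-2)
for _ in range(200):
    opt.zero_grad()
    loss = rayleigh_quotient(field, sphere_samples(512))
    loss.backward()
    opt.step()
```

The minimum of this quotient over fields orthogonal to constants is the first nonzero LBO eigenvalue, which on the unit sphere is 2 (the degree-1 spherical harmonics); gradient descent drives the estimate towards that value.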
The spherical harmonics are the eigenfunctions of the LBO on a sphere. Here, we show that the spherical harmonics are well approximated by our method, and that the estimated energy levels/eigenvalues align closely with the known values (green).
Bibtex
@article{williamson2024spherical,
  title   = {Neural Geometry Processing via Spherical Neural Surfaces},
  author  = {Williamson, Romy and Mitra, Niloy J.},
  year    = {2024},
  journal = {arXiv preprint}
}