Neural Geometry Processing via Spherical Neural Surfaces

1University College London         2Adobe Research

[arXiv 2024] [Code]

We encode input genus-0 surfaces as overfitted neural networks and propose operators on them. Specifically, we describe how to compute the geometric Jacobian and the First and Second Fundamental Forms, and hence compute curvatures. We also define a Laplace-Beltrami operator directly on the neural representation, thus enabling processing of scalar (or vector) fields on the underlying surface as well as performing spectral analysis. We avoid any unnecessary discretization, as commonly encountered when using a traditional surface representation (e.g., a polygonal mesh).


Abstract

Neural surfaces (e.g., neural map encoding, deep implicits and neural radiance fields) have recently gained popularity because of their generic structure (e.g., multi-layer perceptrons) and easy integration with modern learning-based setups. Traditionally, we have a rich toolbox of geometry processing algorithms designed for polygonal meshes to analyze and operate on surface geometry. However, neural representations are typically discretized and converted into a mesh before applying any geometry processing algorithm. This is unsatisfactory and, as we demonstrate, unnecessary. In this work, we propose a spherical neural surface representation (a spherical parametrization) for genus-0 surfaces and demonstrate how to compute core geometric operators directly on this representation. Namely, we show how to construct the normals and the first and second fundamental forms of the surface, and how to compute the surface gradient, surface divergence and Laplace-Beltrami operator on scalar/vector fields defined on the surface.

These operators, in turn, enable us to create geometry processing tools that act directly on the neural representations without any unnecessary discretization or meshing. We demonstrate illustrative applications in (neural) spectral analysis, heat flow and mean curvature flow, and our method shows robustness to isometric shape variations. We both propose theoretical formulations and validate their numerical estimates. By systematically linking neural surface representations with classical geometry processing algorithms, we believe this work can become a key ingredient in enabling neural geometry processing.


Method Overview

Given an input surface — a mesh in this case — we progressively overfit an MLP to represent that individual surface. The loss term is simply the MSE between the ground-truth and predicted surface positions, plus a (scaled) normals regularization term.
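A minimal sketch of this overfitting loop, using an illustrative architecture and hyper-parameters (not the paper's), a toy analytic target instead of a mesh, and with the normals regularization term omitted for brevity:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical SNS architecture: an MLP f: S^2 -> R^3 (sizes are illustrative).
sns = nn.Sequential(
    nn.Linear(3, 64), nn.Softplus(),
    nn.Linear(64, 64), nn.Softplus(),
    nn.Linear(64, 3))

# Toy target surface: a sphere of radius 1.5, so ground-truth positions are 1.5*p.
p = torch.nn.functional.normalize(torch.randn(256, 3), dim=-1)  # samples on S^2
gt_pos = 1.5 * p

opt = torch.optim.Adam(sns.parameters(), lr=1e-2)
losses = []
for _ in range(200):
    opt.zero_grad()
    # position MSE; the paper's loss additionally includes a scaled normals
    # regularization term (normals obtained from the Jacobian), omitted here
    loss = ((sns(p) - gt_pos) ** 2).mean()
    loss.backward()
    opt.step()
    losses.append(loss.item())
```

After a few hundred steps the network positions closely match the target samples; in practice the fit is done progressively against the input mesh rather than an analytic shape.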

The main advantage of Spherical Neural Surfaces as a geometry representation is that it is extremely natural to compute many important quantities from continuous differential geometry, without any need for approximation or discretization. This is thanks to the automatic differentiation built into modern machine-learning frameworks. For example, to compute the outwards-pointing normal, we simply compute the 3 × 3 Jacobian of the output with respect to the input using autograd, and then convert it into a Jacobian for a local 2D parametrization by choosing a suitable orthonormal basis for the tangent plane on the sphere.
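This normal computation can be sketched with PyTorch autograd (an assumed interface, not the paper's code), using the analytic map f(p) = 1.5p — a sphere of radius 1.5 — as a stand-in for a trained SNS:

```python
import torch

def tangent_basis(p):
    """An orthonormal pair spanning the tangent plane of the unit sphere at p.
    With t2 = p x t1 the basis is right-handed, so the resulting normal points
    outwards."""
    a = torch.tensor([1.0, 0.0, 0.0]) if abs(p[0]) < 0.9 else torch.tensor([0.0, 1.0, 0.0])
    t1 = torch.nn.functional.normalize(a - (a @ p) * p, dim=0)
    t2 = torch.linalg.cross(p, t1)
    return t1, t2

def surface_normal(f, p):
    """Normal of the surface f(S^2) at sphere point p, via the 3x3 Jacobian."""
    J = torch.func.jacrev(f)(p)      # 3x3 Jacobian from autograd
    t1, t2 = tangent_basis(p)
    du, dv = J @ t1, J @ t2          # tangent basis pushed onto the surface
    return torch.nn.functional.normalize(torch.linalg.cross(du, dv), dim=0)

# sanity check on the analytic stand-in: the normal of a centred sphere at
# f(p) = 1.5*p is p itself
p = torch.nn.functional.normalize(torch.tensor([0.2, -0.4, 0.9]), dim=0)
n = surface_normal(lambda q: 1.5 * q, p)
```

For a trained SNS, `f` would simply be the overfitted MLP.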

After computing the normals function and the local Jacobian, we can use standard differential-geometry formulas to compute the First and Second Fundamental Forms, and quantities derived from them, such as Gaussian/mean curvature and the principal (min/max) curvature directions.
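The curvature formulas involved are textbook facts rather than anything paper-specific; a small sketch, with a sphere as a sanity check:

```python
# Given first fundamental form I = [[E, F], [F, G]] and second fundamental
# form II = [[L, M], [M, N]], Gaussian and mean curvature follow directly.
def curvatures(E, F, G, L, M, N):
    det_I = E * G - F * F
    K = (L * N - M * M) / det_I                    # Gaussian curvature
    H = (E * N - 2 * F * M + G * L) / (2 * det_I)  # mean curvature
    return K, H

# sanity check: at the equator of a radius-r sphere in spherical coordinates,
# E = G = r^2, F = 0, L = N = r, M = 0, so K = 1/r^2 and H = 1/r
# (up to the sign convention for the normal orientation)
r = 2.0
K, H = curvatures(r**2, 0.0, r**2, r, 0.0, r)
```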


We compute differential quantities on an SNS overfitted to an analytically-defined shape, and compare the results to the ground truth for the analytic shape (computed with MATLAB's symbolic toolbox).
We can compute surface gradient, surface divergence and a continuous Laplace-Beltrami operator (LBO).
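For reference, these operators are related by the standard coordinate expressions (textbook definitions, not specific to this paper), writing g for the first fundamental form in a local chart:

```latex
\nabla_S f = g^{ij}\,\partial_j f \,\mathbf{x}_i, \qquad
\operatorname{div}_S \mathbf{V} = \frac{1}{\sqrt{\det g}}\,
    \partial_i\!\left(\sqrt{\det g}\; V^i\right), \qquad
\Delta_S f = \operatorname{div}_S\!\left(\nabla_S f\right)
           = \frac{1}{\sqrt{\det g}}\,
    \partial_i\!\left(\sqrt{\det g}\; g^{ij}\,\partial_j f\right).
```

On an SNS, all the ingredients (the chart, g and its derivatives) are available continuously through autograd.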

We can also represent scalar fields on the surface, using another small MLP. The network describes a scalar field on the sphere, and this implicitly defines a scalar field on the surface through composition with the SNS.
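A minimal sketch of such a neural scalar field (the architecture is a hypothetical stand-in): the MLP g maps sphere points to scalars, so the value of the field at the surface point f(p) is simply g(p), with no meshing of the surface involved.

```python
import torch
import torch.nn as nn

class SphereField(nn.Module):
    """A small MLP g: S^2 -> R; composed with the SNS f it defines a scalar
    field on the surface: the value at surface point f(p) is g(p)."""
    def __init__(self, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(),
            nn.Linear(hidden, 1))

    def forward(self, p):           # p: (N, 3) unit vectors on the sphere
        return self.net(p).squeeze(-1)

field = SphereField()
p = torch.nn.functional.normalize(torch.randn(8, 3), dim=-1)
vals = field(p)                     # one scalar per sphere/surface sample
```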
By viewing the eigenfunctions of the LBO as solutions to a constrained energy-minimisation problem, and computing the surface integrals by Monte-Carlo estimation, we can find the smallest spectral modes of the LBO through gradient descent on the neural scalar-field representation. This approach is novel, and it demonstrates that, with modern tools, classic geometry processing tasks can be completed without reverting to a discrete representation such as a mesh.
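As an illustrative check of the Monte-Carlo surface integrals (not the paper's optimisation loop): on the unit sphere the SNS can be taken as the identity, and the Rayleigh quotient R[g] = ∫|∇_S g|² dA / ∫ g² dA of an LBO eigenfunction equals its eigenvalue. For g(p) = z, the l = 1 spherical harmonic, the eigenvalue is l(l+1) = 2, and a simple Monte-Carlo estimate recovers it:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.standard_normal((200_000, 3))
p /= np.linalg.norm(p, axis=1, keepdims=True)  # uniform samples on S^2

g = p[:, 2]                                    # g(p) = z
# the ambient gradient of z is (0, 0, 1); its tangent projection has
# squared norm 1 - z^2, which is |grad_S g|^2
grad_sq = 1.0 - g**2

# Monte-Carlo Rayleigh quotient: the common surface-area factor cancels
rayleigh = grad_sq.mean() / (g**2).mean()      # ~ 2, the l=1 eigenvalue
```

In the method itself, g is the neural scalar field and the same integrals are minimised by gradient descent, with constraints enforcing unit norm and orthogonality to previously found modes.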
The Spherical Harmonics are the eigenfunctions of the LBO on a sphere. Here, we show that the Spherical Harmonics are well-approximated by our method, and that the estimated energy levels/eigenvalues align closely with the known values (green).
Bibtex

@article{williamson2024spherical,
  title   = {Neural Geometry Processing via Spherical Neural Surfaces},
  author  = {Williamson, Romy and Mitra, Niloy J.},
  year    = {2024},
  journal = {arXiv}
}
      