We propose a generative model of 2D and 3D natural textures with high diversity, visual fidelity, and computational efficiency.
This is enabled by a family of methods that extends ideas from classic stochastic procedural texturing (Perlin noise) to deep, learned non-linearities.
The key idea is a hard-coded, tunable, and differentiable step that feeds multiple transformed random 2D or 3D fields into an MLP, yielding a texture that can be sampled over infinite domains.
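To make this concrete, here is a minimal sketch (not the authors' implementation) of how transformed random fields can feed a point-wise MLP; the field count, grid resolution, coordinate wrapping, and layer sizes are illustrative assumptions, written in PyTorch.

```python
# Minimal sketch (illustrative, not the paper's code): each query position is
# mapped through a learned 2x2 transform, indexes a fixed random field, and the
# stacked noise samples are decoded by a point-wise MLP into a colour.
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoiseMLPTexture(nn.Module):
    def __init__(self, n_fields=16, grid_res=64, hidden=128):
        super().__init__()
        # Fixed random 2D fields, one per transform (grid resolution is assumed).
        self.register_buffer("fields", torch.randn(n_fields, 1, grid_res, grid_res))
        # Learned per-field 2x2 transforms (scale / rotation / shear of coordinates).
        self.transforms = nn.Parameter(torch.randn(n_fields, 2, 2) * 0.5)
        self.mlp = nn.Sequential(
            nn.Linear(n_fields, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, xy):                                            # xy: (B, 2) coordinates
        coords = torch.einsum("fij,bj->fbi", self.transforms, xy)     # (F, B, 2)
        # Wrap coordinates into [-1, 1) so any point in the plane maps onto the
        # stored grids (the wrap is a simplification made for this sketch).
        grid = (coords.remainder(2.0) - 1.0).unsqueeze(2)             # (F, B, 1, 2)
        noise = F.grid_sample(self.fields, grid, align_corners=True)  # (F, 1, B, 1)
        feat = noise.squeeze(-1).squeeze(1).t()                       # (B, F)
        return self.mlp(feat)                                         # (B, 3) RGB

# Because evaluation is per point, the texture can be queried anywhere in the plane:
tex = NoiseMLPTexture()
rgb = tex(torch.rand(4096, 2) * 10.0)   # colours at 4096 arbitrary 2D positions
```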
Our model encodes all exemplars from a diverse set of textures without needing to be re-trained for each exemplar.
Applications include texture interpolation and learning 3D textures from 2D exemplars.
Overview of our approach, comprising three main parts: the first is an encoder g that takes texture images y as input and generates a compact latent code z (orange). A small translation network h converts this latent code into parameters p that condition a non-convolutional (MLP) decoder f (dotted), which takes noise sampled with learned transformations (green) and maps it to appearance (pink) with the same statistics as the exemplar (blue).
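A hedged sketch of that three-part pipeline follows (PyTorch; only the names g, h, f, y, z, p come from the caption, while the layer sizes and the concatenation-style conditioning of the decoder are assumptions for illustration):

```python
# Illustrative sketch of the encoder g, translation network h, and point-wise
# decoder f from the overview figure; not the authors' implementation.
import torch
import torch.nn as nn

class Encoder(nn.Module):                     # g: exemplar image y -> latent code z
    def __init__(self, z_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.fc = nn.Linear(64, z_dim)

    def forward(self, y):
        return self.fc(self.conv(y).flatten(1))

class Translator(nn.Module):                  # h: latent code z -> parameters p
    def __init__(self, z_dim=64, p_dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(), nn.Linear(128, p_dim))

    def forward(self, z):
        return self.net(z)

class Decoder(nn.Module):                     # f: noise features + p -> per-point RGB
    def __init__(self, noise_dim=16, p_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + p_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid())

    def forward(self, noise_feat, p):
        # p is broadcast to every query point; concatenation is an assumed
        # conditioning mechanism so the decoder stays point-wise.
        p = p.expand(noise_feat.shape[0], -1)
        return self.net(torch.cat([noise_feat, p], dim=-1))

# Assembling the pipeline for one exemplar (shapes are illustrative):
g, h, f = Encoder(), Translator(), Decoder()
y = torch.rand(1, 3, 128, 128)                # exemplar texture image
z = g(y)                                      # compact latent code (orange)
p = h(z)                                      # conditioning parameters
noise_feat = torch.randn(4096, 16)            # stands in for the transformed noise fields
rgb = f(noise_feat, p)                        # per-point appearance (pink), shape (4096, 3)

# Texture interpolation (one of the listed applications) reduces to blending codes:
y2 = torch.rand(1, 3, 128, 128)               # a second exemplar
rgb_mix = f(noise_feat, h(0.5 * g(y) + 0.5 * g(y2)))   # texture halfway between the two
```

Conditioning a shared decoder on a per-exemplar code is what lets one network cover many exemplars without retraining, and it makes interpolation a simple blend of latent codes.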
2D Results