DepthCut: Improved Depth Edge Estimation Using Multiple Unreliable Channels

1University College London     2Adobe Research

arXiv

We present DepthCut, a method to estimate depth edges with improved accuracy from unreliable input channels, namely RGB images, normal estimates, and disparity estimates. Starting from a single image or an image pair, our method produces depth edges consisting of depth contours and creases, and separates regions of smoothly varying depth. Complementary information from the unreliable input channels is fused using a neural network trained on a dataset with known depth. The resulting depth edges can be used to refine a disparity estimate or to infer a hierarchical image segmentation.
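
The sketch below is a minimal, hypothetical illustration of this kind of channel fusion, written in PyTorch; the architecture, layer sizes, and names are our own assumptions and do not reproduce the network described in the paper. It simply concatenates the RGB, normal, and disparity channels and predicts a per-pixel depth-edge probability.

# Minimal sketch (not the authors' architecture): a small convolutional
# network that fuses concatenated RGB, normal, and disparity channels
# into a per-pixel depth-edge probability map. Layer sizes and the
# encoder-decoder depth here are illustrative assumptions.
import torch
import torch.nn as nn

class ChannelFusionNet(nn.Module):
    def __init__(self, in_channels=7):  # 3 (RGB) + 3 (normals) + 1 (disparity)
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),  # one depth-edge logit per pixel
        )

    def forward(self, rgb, normals, disparity):
        x = torch.cat([rgb, normals, disparity], dim=1)  # fuse unreliable channels
        return torch.sigmoid(self.decoder(self.encoder(x)))

# Usage with dummy inputs of matching spatial size:
net = ChannelFusionNet()
rgb = torch.rand(1, 3, 128, 128)
normals = torch.rand(1, 3, 128, 128)
disparity = torch.rand(1, 1, 128, 128)
edges = net(rgb, normals, disparity)  # (1, 1, 128, 128) edge probabilities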


Abstract

In the context of scene understanding, a variety of methods exist to estimate different information channels from mono or stereo images, including disparity, depth, and normals. Although several advances have been reported in recent years for these tasks, the estimated information is often imprecise, particularly near depth discontinuities or creases. Studies have shown, however, that precisely such depth edges carry critical cues for the perception of shape and play important roles in tasks like depth-based segmentation or foreground selection. Unfortunately, the currently extracted channels often carry conflicting signals, making it difficult for subsequent applications to use them effectively. In this paper, we focus on the problem of obtaining high-precision depth edges (i.e., depth contours and creases) by jointly analyzing such unreliable information channels. We propose DepthCut, a data-driven fusion of the channels using a convolutional neural network trained on a large dataset with known depth. The resulting depth edges can be used for segmentation, for decomposing a scene into depth layers with relatively flat depth, or for improving the accuracy of a depth estimate near depth edges by constraining its gradients to agree with these edges. Quantitatively, we compare against 15 baseline variants and demonstrate that our depth edges yield improved segmentation performance and a more accurate depth estimate near depth edges compared to data-agnostic channel fusion. Qualitatively, we demonstrate that the depth edges lead to superior segmentations and depth orderings.
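
As a rough illustration of the depth-refinement idea mentioned above (not the paper's actual formulation), the following sketch smooths a depth estimate with a weighted least-squares term whose smoothness weights vanish across the estimated depth edges, so depth gradients are damped everywhere except at the edges. The function refine_depth and the specific weighting scheme are our own assumptions.

# Minimal sketch of edge-aware depth refinement (a simple weighted
# least-squares smoothing, not the paper's exact optimization): the
# refined depth stays close to the input estimate, while its gradients
# are damped everywhere except across the estimated depth edges.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def refine_depth(depth, edge_prob, lam=10.0):
    """depth: HxW initial estimate; edge_prob: HxW in [0,1]; lam: smoothness weight."""
    h, w = depth.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)

    rows, cols, vals = [], [], []
    def add_pair(i, j, weight):
        # smoothness term weight * (d_i - d_j)^2 added to the normal equations
        rows.extend([i, j, i, j]); cols.extend([i, j, j, i])
        vals.extend([weight, weight, -weight, -weight])

    # horizontal and vertical neighbours; weight drops to ~0 across depth edges
    wx = lam * (1.0 - np.maximum(edge_prob[:, :-1], edge_prob[:, 1:]))
    wy = lam * (1.0 - np.maximum(edge_prob[:-1, :], edge_prob[1:, :]))
    for i, j, wgt in zip(idx[:, :-1].ravel(), idx[:, 1:].ravel(), wx.ravel()):
        add_pair(i, j, wgt)
    for i, j, wgt in zip(idx[:-1, :].ravel(), idx[1:, :].ravel(), wy.ravel()):
        add_pair(i, j, wgt)

    # solve (I + L) d = d0, where L is the edge-weighted graph Laplacian
    A = sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr() + sp.identity(n)
    b = depth.ravel()  # data term keeps the result close to the input estimate
    return spla.spsolve(A, b).reshape(h, w)

# Example: a noisy depth ramp with a known depth edge down the middle
depth = np.tile(np.linspace(0, 1, 64), (64, 1)) + 0.05 * np.random.randn(64, 64)
edges = np.zeros((64, 64)); edges[:, 32] = 1.0
refined = refine_depth(depth, edges)  # smooth away from the edge, sharp across it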


Bibtex
@misc{GuerreroEtAl:DepthCut:2017,
  title  = {{DepthCut}: Improved Depth Edge Estimation Using Multiple Unreliable Channels},
  author = {Paul Guerrero and Holger Winnem{\"o}ller and Wilmot Li and Niloy J. Mitra},
  year   = {2017},
  eprint = {arXiv:1705.07844},
}
Links

Paper (8.96 MB)

Supplementary Material (234 MB)

Data not yet available

Code not yet available

Slides not available