Single-Image Depth Inference Using Generative Adversarial Networks

Daniel Stanley Tan, Chih-Yuan Yao, Conrado Ruiz, Kai-Lung Hua*

*Corresponding author for this work

Research output: Contribution to journal › Special issue › Academic › peer-review

Abstract

Depth is a valuable piece of information for perception tasks such as robot grasping, obstacle avoidance, and navigation, which are essential for developing smart homes and smart cities. However, not all applications have the luxury of using depth sensors or multiple cameras to obtain depth information. In this paper, we tackle the problem of estimating per-pixel depth from a single image. Inspired by recent work on generative neural network models, we formulate depth estimation as a generative task in which we synthesize an image of the depth map from a single Red, Green, and Blue (RGB) input image. We propose a novel generative adversarial network that has an encoder-decoder type generator with residual transposed convolution blocks trained with an adversarial loss. Quantitative and qualitative experimental results demonstrate the effectiveness of our approach over several existing depth estimation methods.
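To make the architecture described in the abstract concrete, below is a minimal PyTorch sketch of a residual transposed-convolution decoder block and an encoder-decoder generator that maps an RGB image to a one-channel depth map. The class names (ResidualUpBlock, DepthGenerator), channel widths, normalization layers, and output activation are illustrative assumptions, not the authors' published implementation; the paper's actual generator, discriminator, and adversarial loss details differ.

import torch
import torch.nn as nn

class ResidualUpBlock(nn.Module):
    # Residual block built around a transposed convolution that doubles
    # spatial resolution; the skip path is upsampled to match. (Assumed
    # structure for illustration, not the paper's exact block.)
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.main = nn.Sequential(
            nn.ConvTranspose2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # Skip connection: upsample and project channels so shapes match.
        self.skip = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv2d(in_ch, out_ch, kernel_size=1),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.main(x) + self.skip(x))

class DepthGenerator(nn.Module):
    # Encoder-decoder generator: 3-channel RGB in, 1-channel depth map out.
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            ResidualUpBlock(128, 64),
            ResidualUpBlock(64, 32),
            nn.Conv2d(32, 1, 3, padding=1),
            nn.Sigmoid(),  # depth normalized to [0, 1]
        )

    def forward(self, rgb):
        return self.decoder(self.encoder(rgb))

if __name__ == "__main__":
    g = DepthGenerator()
    depth = g(torch.randn(1, 3, 128, 128))
    print(depth.shape)  # torch.Size([1, 1, 128, 128])

In a full adversarial setup, this generator would be trained jointly with a convolutional discriminator that distinguishes real depth maps from synthesized ones, combining the adversarial loss with a pixel-wise reconstruction term, as is common in image-to-image GANs.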
Original language: English
Article number: 1708
Journal: Sensors
Volume: 19
Issue number: 7
DOIs
Publication status: Published - 10 Apr 2019
Externally published: Yes

Keywords

  • Depth estimation
  • Encoder-decoder networks
  • Generative adversarial networks
