Deep Learning for Understanding Satellite Imagery: An Experimental Survey

Sharada Prasanna Mohanty Jakub Czakon Kamil A. Kaczmarek Andrzej Pyskir
Piotr Tarasiewicz Saket Kunwar Janick Rohrbach Dave Luo
Manjunath Prasad Sascha Fleer Jan Philip Göpfert Akshat Tandon Guillaume Mollard Nikhil Rayaprolu Marcel Salathe Malte Schilling
Published: 16 Nov 2020


Abstract

Translating satellite imagery into maps requires intensive effort and time, often resulting in inaccurate maps of affected regions during disasters and conflicts. The availability of recent datasets, combined with advances in computer vision through deep learning, has paved the way toward automated satellite image translation. To facilitate research in this direction, we introduce the Satellite Imagery Competition, based on a modified SpaceNet dataset, in which participants developed segmentation models to detect the positions of buildings in satellite images. In this work, we present five approaches built on improvements to the U-Net and Mask R-CNN (Mask Region-based Convolutional Neural Network) models, coupled with training adaptations such as boosting algorithms, morphological filtering, Conditional Random Fields, and custom losses. The strong results from these models, reaching average precision (AP) of 0.937 and average recall (AR) of 0.959, demonstrate the feasibility of deep learning for automated satellite image annotation.
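The AP and AR figures above are computed by matching each predicted building footprint against the ground truth. As a rough illustration of how such detection-level precision and recall are obtained, the sketch below greedily matches predicted boxes to ground-truth boxes by intersection-over-union (IoU). This is a simplified, hypothetical helper, not the competition's actual scorer: the 0.5 IoU threshold, the use of axis-aligned boxes rather than building polygons, and the greedy matching strategy are all assumptions for illustration.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def precision_recall(pred, truth, thr=0.5):
    """Greedy one-to-one matching of predictions to ground truth.

    A prediction counts as a true positive when its best unmatched
    ground-truth box has IoU >= thr (threshold assumed, not from the paper).
    """
    matched = set()  # indices of ground-truth boxes already claimed
    tp = 0
    for p in pred:
        best, best_j = 0.0, None
        for j, t in enumerate(truth):
            if j in matched:
                continue
            score = iou(p, t)
            if score > best:
                best, best_j = score, j
        if best_j is not None and best >= thr:
            matched.add(best_j)
            tp += 1
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall
```

For example, with two predicted boxes of which one overlaps the single ground-truth building well, `precision_recall([(0, 0, 10, 10), (20, 20, 30, 30)], [(1, 1, 10, 10)])` yields a precision of 0.5 and a recall of 1.0.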
