Deep Learning with Weak Supervision for Disaster Scene Description in Low-Altitude Imagery

Maria Presa-Reyes, Yudong Tao, Shu-Ching Chen, Mei-Ling Shyu

Research output: Contribution to journal › Article › peer-review


Pictures or videos captured from a low-altitude aircraft or an unmanned aerial vehicle are a fast and cost-effective way to survey an affected scene for the quick and precise assessment of a catastrophic event's impacts and damages. Using advanced techniques such as deep learning, it is now possible to automate the description of disaster scenes and identify features in captured images or recorded videos to gain situational awareness. However, building a large-scale, high-quality dataset with annotated disaster-related features for supervised model training is time-consuming and costly. In this paper, we propose a weakly-supervised approach to train a deep neural network on low-altitude imagery with highly imbalanced and noisy crowd-sourced labels. We further make use of the rich spatio-temporal data obtained from the pictures and their sequence information to enhance the model's performance during training via label propagation. Our approach achieves the highest score among all the submitted runs in the TRECVID 2020 Disaster Scene Description and Indexing (DSDI) Challenge, indicating its superior capability in retrieving disaster-related video clips compared to other proposed methods.
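The label-propagation idea mentioned above can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' exact method: it assumes that a disaster feature observed on one frame likely persists on temporally adjacent frames, so positive labels are shared within a small temporal window.

```python
# Hypothetical sketch of temporal label propagation for noisy,
# crowd-sourced frame labels. A positive label on a frame is
# propagated to neighbors within `window` steps, assuming
# disaster features persist across consecutive frames.

def propagate_labels(frame_labels, window=1):
    """frame_labels: list of sets of feature labels, in temporal order.
    Returns a new list where each frame also receives the labels seen
    on frames within `window` steps of it."""
    n = len(frame_labels)
    propagated = []
    for i in range(n):
        labels = set(frame_labels[i])
        # union in labels from the temporal neighborhood of frame i
        for j in range(max(0, i - window), min(n, i + window + 1)):
            labels |= frame_labels[j]
        propagated.append(labels)
    return propagated
```

For example, a frame with no annotation sandwiched between a "flood" frame and a "debris" frame would receive both labels, which densifies sparse crowd-sourced annotations at the cost of some added label noise near shot boundaries.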

Original language: English (US)
Journal: IEEE Transactions on Geoscience and Remote Sensing
State: Accepted/In press - 2021


  • Annotations
  • Convolutional neural networks
  • Deep learning
  • Disaster Scene Description
  • Noise measurement
  • Training
  • Training data
  • Videos
  • Weak Supervision

ASJC Scopus subject areas

  • Electrical and Electronic Engineering
  • Earth and Planetary Sciences (all)


