
Convolutional neural networks for object detection in aerial imagery for disaster response and recovery.

Authors :
Pi, Yalong
Nath, Nipun D.
Behzadan, Amir H.
Source :
Advanced Engineering Informatics. Jan2020, Vol. 43.
Publication Year :
2020

Abstract

• Successful disaster management requires data describing disaster impact and extent of damage.
• Aerial data collection is resource-intensive and requires offline post-processing.
• We introduce convolutional neural network (CNN) models for ground object detection.
• The models recognize building roofs, cars, vegetation, debris, and flooded areas.
• We achieve 80.69% and 74.48% mAP for high- and low-altitude footage, respectively.

Accurate and timely access to data describing disaster impact and extent of damage is key to successful disaster management (a process that includes prevention, mitigation, preparedness, response, and recovery). Airborne data acquisition using helicopters and unmanned aerial vehicles (UAVs) helps obtain a bird's-eye view of disaster-affected areas. However, a major challenge to this approach is robustly processing large amounts of data to identify and map objects of interest on the ground in real time. The current process is resource-intensive (it must be carried out manually) and requires offline computing (through post-processing of aerial videos). This research introduces and evaluates a series of convolutional neural network (CNN) models for ground object detection from aerial views of a disaster's aftermath. These models are capable of recognizing critical ground assets, including building roofs (both damaged and undamaged), vehicles, vegetation, debris, and flooded areas. The CNN models are trained on an in-house aerial video dataset (named Volan2018) created using web mining techniques. Volan2018 contains eight annotated aerial videos (65,580 frames) collected by drone or helicopter from eight different locations during hurricanes that struck the United States in 2017–2018.
Eight CNN models based on the You-Only-Look-Once (YOLO) algorithm are trained by transfer learning, i.e., pre-trained on the COCO or VOC dataset and re-trained on the Volan2018 dataset, and achieve 80.69% mAP on high-altitude (helicopter) footage and 74.48% mAP on low-altitude (drone) footage. This paper also presents a thorough investigation of the effect of camera altitude, data balance, and pre-trained weights on model performance, and finds that models trained and tested on videos taken from similar altitudes outperform those trained and tested on videos taken from different altitudes. Moreover, the CNN model pre-trained on the VOC dataset and re-trained on balanced drone video yields the best result in significantly shorter training time. [ABSTRACT FROM AUTHOR]
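The headline numbers above are mean average precision (mAP) scores. The abstract does not spell out the evaluation protocol, but a minimal sketch of a PASCAL-VOC-style mAP computation (all-point interpolation) is shown below. The function names and data layout are illustrative, not the authors' code, and detection-to-ground-truth matching via an IoU threshold is assumed to have been done upstream, yielding a true/false-positive flag per detection.

```python
def average_precision(scores, is_tp, num_gt):
    """AP for one class, PASCAL-VOC style with all-point interpolation.

    scores : confidence score per detection
    is_tp  : True if that detection matched a ground-truth box (IoU check
             assumed done upstream), else False
    num_gt : number of ground-truth boxes for this class
    """
    # Rank detections by descending confidence.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    prec, rec = [], []
    for i in order:
        if is_tp[i]:
            tp += 1
        else:
            fp += 1
        prec.append(tp / (tp + fp))
        rec.append(tp / num_gt)
    # Precision envelope: make precision monotonically non-increasing
    # when scanned from high recall back to low recall.
    for k in range(len(prec) - 2, -1, -1):
        prec[k] = max(prec[k], prec[k + 1])
    # Area under the stepwise precision-recall curve.
    ap, prev_r = 0.0, 0.0
    for p, r in zip(prec, rec):
        ap += (r - prev_r) * p
        prev_r = r
    return ap


def mean_average_precision(per_class):
    """per_class: list of (scores, is_tp, num_gt) tuples, one per class."""
    aps = [average_precision(s, t, n) for s, t, n in per_class]
    return sum(aps) / len(aps)
```

For example, a class where both detections are correct yields an AP of 1.0, while a class whose highest-scoring detection is a false positive is penalized even if a lower-scoring detection recovers the object; mAP then averages these per-class APs, which is why class balance in the training data (a factor the paper investigates) matters for the final score.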

Details

Language :
English
ISSN :
14740346
Volume :
43
Database :
Academic Search Index
Journal :
Advanced Engineering Informatics
Publication Type :
Academic Journal
Accession number :
141983957
Full Text :
https://doi.org/10.1016/j.aei.2019.101009