
Lung segmentation based on a deep learning approach for dynamic chest radiography

Authors :
Yuki Kitahara
Holger R. Roth
Kensaku Mori
Isao Matsumoto
Hirohisa Oda
Kazuo Kasahara
Rie Tanaka
Source :
Medical Imaging: Computer-Aided Diagnosis
Publication Year :
2019
Publisher :
SPIE, 2019.

Abstract

The purpose of this study was to develop a lung segmentation method based on a deep learning approach for dynamic chest radiography, and to assess its clinical utility for pulmonary function assessment. Maximum-inhale and maximum-exhale images were selected from dynamic chest radiographs of 214 cases, comprising 150 images during respiration. In total, 534 annotated images (2 to 4 images per case) were prepared for this study. Three hundred images were fed into a fully convolutional neural network (FCNN) architecture to train a deep learning model for lung segmentation, and 234 images were used for testing. To reduce misrecognition of the lung, post-processing methods based on time-series information were applied to the resulting images. The change rate of the lung area was calculated across all frames, and its clinical utility was assessed in patients with pulmonary diseases. The Sorensen-Dice coefficients between the segmentation results and the gold standard were 0.94 in the inhale phase and 0.95 in the exhale phase, respectively. There were some false recognitions (214/234), but 163 were eliminated by our post-processing. The measurement of the lung area and its respiratory change was useful for evaluating lung conditions; prolonged expiration in obstructive pulmonary diseases could be detected as a reduced change rate of the lung area in the exhale phase. A semantic segmentation deep learning approach allows sequential lung segmentation of dynamic chest radiographs with high accuracy (94%) and is useful for the evaluation of pulmonary function.
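The two quantitative measures used in the abstract — the Sorensen-Dice coefficient against a gold-standard mask and the frame-to-frame change rate of the segmented lung area — can be sketched as below. This is a minimal illustration with NumPy binary masks, not the authors' implementation; the function names and the definition of change rate (relative area difference between consecutive frames) are assumptions for illustration.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gold: np.ndarray) -> float:
    """Sorensen-Dice coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gold = gold.astype(bool)
    intersection = np.logical_and(pred, gold).sum()
    total = pred.sum() + gold.sum()
    # Two empty masks are treated as a perfect match.
    return 2.0 * intersection / total if total else 1.0

def lung_area_change_rate(masks: list) -> np.ndarray:
    """Relative change of segmented lung area between consecutive
    frames of a dynamic radiograph sequence (one mask per frame)."""
    areas = np.array([np.asarray(m).astype(bool).sum() for m in masks],
                     dtype=float)
    return np.diff(areas) / areas[:-1]
```

With per-frame masks from the segmentation model, a flattened (reduced-magnitude) change-rate curve in the exhale phase would correspond to the prolonged expiration the abstract describes in obstructive pulmonary disease.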

Details

Database :
OpenAIRE
Journal :
Medical Imaging 2019: Computer-Aided Diagnosis
Accession number :
edsair.doi...........45046368af7bbc00b3cd11f2cb82c20e
Full Text :
https://doi.org/10.1117/12.2512711