Infrared and visible image fusion based on dilated residual attention network
- Source: Optik 224:165409
- Publication Year: 2020
- Publisher: Elsevier BV
Abstract
- In recent years, deep learning (DL)-based techniques have achieved significant improvements in image fusion applications. Yet, current DL-based approaches pose formidable feature-extraction, computational, and statistical challenges for image fusion models. To overcome these challenges, we propose an end-to-end DL-based architecture for infrared (IR) and visible (VIS) image fusion. We introduce multi-scale feature extraction and a new self-attention-based feature fusion strategy to generate a high-quality fused image with balanced details from the IR and VIS modalities. Specifically, instead of using standard convolutions, we employ dilated convolutions in the encoders to extract multi-scale features from the IR and VIS images. Additionally, we introduce a self-attention mechanism to refine and adaptively fuse the multi-contextual features of the IR and VIS images. The fused image is generated by the decoder of the network. Extensive qualitative and quantitative evaluations on a benchmark dataset show that the proposed method performs competitively against other state-of-the-art and CNN-based image fusion methods.
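The abstract names two key ingredients: dilated convolutions in the encoders (to widen the receptive field for multi-scale features) and an attention-based weighting to fuse the IR and VIS feature maps. The NumPy sketch below illustrates both ideas in their simplest form; it is not the paper's implementation, and the function names, the single-channel setting, and the per-pixel softmax weighting are all illustrative assumptions.

```python
import numpy as np

def dilated_conv2d(img, kernel, dilation=1):
    """Valid 2-D convolution whose kernel taps are spaced `dilation` pixels
    apart, enlarging the receptive field without adding parameters."""
    kh, kw = kernel.shape
    eff_h = kh + (kh - 1) * (dilation - 1)  # effective kernel height
    eff_w = kw + (kw - 1) * (dilation - 1)  # effective kernel width
    H, W = img.shape
    out = np.zeros((H - eff_h + 1, W - eff_w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the window with stride `dilation` (the dilation "holes")
            patch = img[i:i + eff_h:dilation, j:j + eff_w:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out

def softmax_fuse(feat_ir, feat_vis):
    """Fuse two same-shaped feature maps with per-pixel softmax weights
    derived from the activations themselves -- a crude stand-in for the
    paper's self-attention fusion, not its actual mechanism."""
    stack = np.stack([feat_ir, feat_vis])     # shape (2, H, W)
    w = np.exp(stack)
    w = w / w.sum(axis=0, keepdims=True)      # per-pixel softmax over modalities
    return (w * stack).sum(axis=0)            # adaptively weighted combination
```

With a 3x3 kernel and `dilation=2`, each output pixel sees an effective 5x5 window of the input while still using only nine weights, which is why stacking dilated convolutions is a cheap way to capture multi-scale context.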
- Subjects :
- Image fusion
Computer science
business.industry
Deep learning
Feature extraction
Computing Methodologies: Image Processing and Computer Vision
Pattern recognition
Residual
Atomic and Molecular Physics, and Optics
Electronic, Optical and Magnetic Materials
Image (mathematics)
Benchmark (computing)
Artificial intelligence
Electrical and Electronic Engineering
business
Encoder
- ISSN: 0030-4026
- Volume: 224
- Database: OpenAIRE
- Journal: Optik
- Accession number: edsair.doi...........4b8289b9bbe39ec177161320aab7fc3d
- Full Text: https://doi.org/10.1016/j.ijleo.2020.165409