
Enlightening Low-Light Images With Dynamic Guidance for Context Enrichment.

Authors :
Zhu, Lingyu
Yang, Wenhan
Chen, Baoliang
Lu, Fangbo
Wang, Shiqi
Source :
IEEE Transactions on Circuits & Systems for Video Technology; Aug2022, Vol. 32 Issue 8, p5068-5079, 12p
Publication Year :
2022

Abstract

Images acquired in low-light conditions suffer from a series of visual quality degradations, e.g., low visibility, degraded contrast, and intensive noise. These complicated, context-dependent degradations (e.g., noise in smooth regions, over-exposure in well-exposed regions, and low contrast around edges) pose major challenges to low-light image enhancement. Herein, we propose a new methodology that imposes a learnable guidance map derived from signal and deep priors, enabling the deep neural network to enhance low-light images adaptively in a region-dependent manner. The enhancement capability of the learnable guidance map is further exploited through multi-scale dilated context collaboration, leading to contextually enriched feature representations extracted with various receptive fields. By assimilating the intrinsic perceptual information from the learned guidance map, richer and more realistic textures are generated. Extensive experiments on real low-light images demonstrate the effectiveness of our method, which delivers superior results both quantitatively and qualitatively. The code is available at https://github.com/lingyzhu0101/GEMSC to facilitate future research. [ABSTRACT FROM AUTHOR]
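To make the abstract's description more concrete, below is a minimal, illustrative PyTorch sketch of the general idea: a learnable guidance map that modulates features in a region-dependent manner, combined with multi-scale dilated convolutions to gather context at several receptive fields. The module names, channel counts, dilation rates, and fusion scheme are assumptions made for illustration only, not the authors' GEMSC implementation; refer to the GitHub link above for the official code.

import torch
import torch.nn as nn


class DilatedContextBlock(nn.Module):
    # Multi-scale dilated convolutions with different receptive fields
    # (dilation rates and channel widths are illustrative assumptions).
    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        feats = [torch.relu(branch(x)) for branch in self.branches]
        return torch.relu(self.fuse(torch.cat(feats, dim=1)))


class GuidedEnhancer(nn.Module):
    # Region-dependent enhancement driven by a learnable guidance map
    # (a sketch of the concept, not the authors' GEMSC network).
    def __init__(self, channels=32):
        super().__init__()
        self.encode = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # Guidance map in [0, 1], one modulation weight per spatial location.
        self.guidance = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        self.context = DilatedContextBlock(channels)
        self.decode = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, low_light):
        feats = torch.relu(self.encode(low_light))
        g = self.guidance(low_light)      # region-dependent weights
        ctx = self.context(feats)
        # Modulate the contextual features by the guidance map, then reconstruct.
        residual = self.decode(feats + g * ctx)
        return torch.clamp(low_light + residual, 0.0, 1.0)


if __name__ == "__main__":
    x = torch.rand(1, 3, 128, 128)        # dummy low-light image in [0, 1]
    print(GuidedEnhancer()(x).shape)      # torch.Size([1, 3, 128, 128])

In this sketch the guidance map is predicted directly from the input image, so smooth regions, edges, and well-exposed regions receive different modulation weights, which mirrors the region-dependent enhancement behavior the abstract describes.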

Details

Language :
English
ISSN :
1051-8215
Volume :
32
Issue :
8
Database :
Complementary Index
Journal :
IEEE Transactions on Circuits & Systems for Video Technology
Publication Type :
Academic Journal
Accession number :
158333569
Full Text :
https://doi.org/10.1109/TCSVT.2022.3146731