
Blind image quality prediction by exploiting multi-level deep representations.

Authors:
Gao, Fei
Yu, Jun
Zhu, Suguo
Huang, Qingming
Tian, Qi
Source:
Pattern Recognition. Sep 2018, Vol. 81, p. 432-442. 11 p.
Publication Year:
2018

Abstract

Blind image quality assessment (BIQA) aims at precisely estimating human-perceived image quality with no access to a reference. Recently, several attempts have been made to develop BIQA methods based on deep neural networks (DNNs). Although these methods have obtained promising performance, they have two limitations: (1) their DNN models are actually "shallow" in terms of depth; and (2) they typically use the output of the last layer in the DNN model as the feature representation for quality prediction. Since representation depth has been demonstrated to be beneficial for various vision tasks, it is worthwhile to explore very deep networks for learning BIQA models. Moreover, the information in the last layer may unduly generalize over local artifacts, which are highly related to quality degradation. Conversely, intermediate layers may be sensitive to local degradations but do not capture high-level semantics. Reasoning at multiple levels of representation is therefore necessary in the IQA task. In this paper, we propose to extract multi-level representations from a very deep DNN model for learning an effective BIQA model, and consequently present a simple but extraordinarily effective BIQA framework, codenamed BLINDER (BLind Image quality predictioN via multi-level DEep Representations). Thorough experiments conducted on five standard databases show that a significant improvement can be achieved by adopting multi-level deep representations. Moreover, BLINDER considerably outperforms previous state-of-the-art BIQA methods on authentically distorted images. [ABSTRACT FROM AUTHOR]
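As an illustration only (not the authors' released code), the following minimal PyTorch sketch shows the core idea the abstract describes: tapping activations at several depths of a pre-trained very deep CNN, globally pooling each, and concatenating them into a single multi-level descriptor. The choice of backbone (VGG-16), the tapped layer indices, and the use of global average pooling are assumptions made for demonstration; the paper's exact configuration may differ.

    # Hedged sketch of multi-level deep feature extraction for BIQA.
    # Backbone, tap points, and pooling are illustrative assumptions,
    # not the paper's exact pipeline.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pre-trained VGG-16 as the "very deep" backbone (an assumption).
    vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).features.eval()

    # Indices of the last ReLU in each of the five conv stages of
    # VGG-16's feature extractor -- one tap per depth level.
    TAP_LAYERS = {3, 8, 15, 22, 29}

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406],
                    std=[0.229, 0.224, 0.225]),
    ])

    def multilevel_features(img: Image.Image) -> torch.Tensor:
        """Concatenate globally pooled activations from several depths."""
        x = preprocess(img).unsqueeze(0)
        feats = []
        with torch.no_grad():
            for i, layer in enumerate(vgg):
                x = layer(x)
                if i in TAP_LAYERS:
                    # Global average pooling collapses the spatial dims,
                    # so channels at every depth -- including the shallow,
                    # local-degradation-sensitive ones -- contribute to
                    # the final descriptor.
                    feats.append(x.mean(dim=(2, 3)).squeeze(0))
        return torch.cat(feats)  # 64 + 128 + 256 + 512 + 512 = 1472 dims

The concatenated multi-level vector would then be regressed against subjective quality scores (e.g., with a support vector regressor), which is the standard way such descriptors are turned into a blind quality predictor.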

Details

Language:
English
ISSN:
0031-3203
Volume:
81
Database:
Academic Search Index
Journal:
Pattern Recognition
Publication Type:
Academic Journal
Accession Number:
129791739
Full Text:
https://doi.org/10.1016/j.patcog.2018.04.016