
Learning to Rank Patches for Unbiased Image Redundancy Reduction

Authors :
Luo, Yang
Chen, Zhineng
Zhou, Peng
Wu, Zuxuan
Gao, Xieping
Jiang, Yu-Gang
Publication Year :
2024

Abstract

Images suffer from heavy spatial redundancy because pixels in neighboring regions are spatially correlated. Existing approaches strive to overcome this limitation by reducing less meaningful image regions. However, current leading methods rely on supervisory signals: they may compel models to preserve content that aligns with labeled categories and discard content belonging to unlabeled categories. This categorical inductive bias makes these methods less effective in real-world scenarios. To address this issue, we propose a self-supervised framework for image redundancy reduction called Learning to Rank Patches (LTRP). We observe that the image reconstruction of masked image modeling models is sensitive to the removal of visible patches when the masking ratio is high (e.g., 90%). Building on this observation, we implement LTRP in two steps: inferring a semantic density score for each patch by quantifying the variation between reconstructions with and without that patch, and learning to rank the patches with the pseudo scores. The entire process is self-supervised, thereby avoiding the dilemma of categorical inductive bias. We conduct extensive experiments on different datasets and tasks. The results demonstrate that LTRP outperforms both supervised and other self-supervised methods owing to its fair assessment of image content.

Comment: Accepted by CVPR 2024
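
As a rough illustration of the two-step procedure described in the abstract, the sketch below scores each visible patch by how much the reconstruction changes when that patch is withheld, then ranks patches by this pseudo score. It is not the authors' implementation: MIMModel is a hypothetical stand-in for a pretrained masked image modeling model (e.g., an MAE-style ViT), and its reconstruct interface, the L2 variation metric, and the 14x14 patch grid are assumptions made purely for illustration.

import torch


class MIMModel(torch.nn.Module):
    """Hypothetical MIM model: reconstructs all patches from a visible subset."""

    def __init__(self, num_patches: int, patch_dim: int):
        super().__init__()
        self.num_patches = num_patches
        self.proj = torch.nn.Linear(patch_dim, patch_dim)

    def reconstruct(self, patches: torch.Tensor, visible_idx: torch.Tensor) -> torch.Tensor:
        # Toy reconstruction: project visible patches back to their positions and
        # fill masked positions with the mean projection (placeholder behavior).
        out = torch.zeros_like(patches)
        proj = self.proj(patches[visible_idx])
        out[visible_idx] = proj
        mask = torch.ones(self.num_patches, dtype=torch.bool)
        mask[visible_idx] = False
        out[mask] = proj.mean(dim=0)
        return out


def semantic_density_scores(model: MIMModel,
                            patches: torch.Tensor,
                            visible_idx: torch.Tensor) -> torch.Tensor:
    """Step 1 (assumed form): score each visible patch by the reconstruction
    variation caused by removing it from the visible set; a larger variation
    is taken to indicate denser semantics."""
    with torch.no_grad():
        base = model.reconstruct(patches, visible_idx)
        scores = torch.zeros(len(visible_idx))
        for i in range(len(visible_idx)):
            reduced_idx = torch.cat([visible_idx[:i], visible_idx[i + 1:]])
            recon = model.reconstruct(patches, reduced_idx)
            scores[i] = (recon - base).pow(2).mean()  # L2 variation as pseudo score
    return scores


if __name__ == "__main__":
    num_patches, patch_dim = 196, 768                 # 14x14 ViT patch grid (assumed)
    patches = torch.randn(num_patches, patch_dim)     # stand-in patch embeddings
    visible_idx = torch.randperm(num_patches)[:20]    # roughly 90% masking ratio
    model = MIMModel(num_patches, patch_dim)
    scores = semantic_density_scores(model, patches, visible_idx)
    # Step 2 would train a ranking model against this ordering; here we only
    # print the patches ranked by the pseudo score.
    ranked = visible_idx[scores.argsort(descending=True)]
    print(ranked[:5])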

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.00680
Document Type :
Working Paper