
Self-Prompting Perceptual Edge Learning for Dense Prediction

Authors :
Chen, Hao
Dong, Yonghan
Lu, Zhe-Ming
Yu, Yunlong
Han, Jungong
Source :
IEEE Transactions on Circuits and Systems for Video Technology; 2024, Vol. 34, Issue 6, pp. 4528-4541, 14p
Publication Year :
2024

Abstract

Numerous studies have employed prompt-learning structures to enhance dense prediction tasks by integrating additional semantic or geometric information. While the inclusion of extra information has shown improvements in performance, it also poses challenges for applications that cannot provide extra input. To address this issue, this study evaluates the performance of different prompts and introduces an additional-input-free method, called self-prompting perceptual edge learning (SPPEL), which extracts edge-embedded semantic prompts directly from the image feature itself using trainable handcrafted edge operators within a plug-and-play module. To obtain the edge features, our approach incorporates an adversarial structure that compares the similarity between two edge features generated by the HOG and Kirsch operators, where the edge features are measured using multiplication, fine-tuned through a trainable all-one embedding, and enhanced with channel-to-channel attention. We conduct extensive evaluations of SPPEL on 7 tasks, utilizing 7 different backbones and applying 5 distinct methods. Our experimental results demonstrate that SPPEL achieves strong competitiveness in various settings, with an average improvement of 1.7% across all 7 tasks, including ADE20K, COCO (Instance Segmentation), COCO (Object Detection), Pascal VOC2012, STARE, CHASE DB1, and HRF, while incurring a parameter increase of less than 3% (the detailed computational analysis of parameters and GFLOPs is shown in the respective experimental tables). Code will be released at: https://github.com/chenhao-zju/sppel
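The abstract mentions handcrafted edge operators (HOG and Kirsch) as the source of the edge-embedded prompts. As a rough illustration only, and not the authors' implementation, the sketch below shows how the classical Kirsch compass operator produces an edge-response map: the image is correlated with eight rotations of one kernel and the per-pixel maximum is taken. Kernel values and the rotation scheme follow the standard Kirsch definition; everything else (function name, test image) is invented for the example.

```python
import numpy as np

def kirsch_edges(img: np.ndarray) -> np.ndarray:
    """Max response over the eight Kirsch compass kernels (illustrative sketch).

    `img` is a 2-D grayscale array; output is the 'valid' region (H-2, W-2).
    """
    # Base (north-facing) Kirsch kernel; the other seven directions are
    # cyclic rotations of its eight border entries.
    base = np.array([[5, 5, 5],
                     [-3, 0, -3],
                     [-3, -3, -3]], dtype=float)
    border = [(0, 0), (0, 1), (0, 2), (1, 2),
              (2, 2), (2, 1), (2, 0), (1, 0)]  # clockwise around the center
    vals = [base[i, j] for (i, j) in border]

    H, W = img.shape
    responses = []
    for r in range(8):
        kernel = np.zeros((3, 3))
        rotated = vals[-r:] + vals[:-r] if r else vals
        for (i, j), v in zip(border, rotated):
            kernel[i, j] = v
        # Plain 'valid' correlation, written with slicing to stay stdlib+numpy.
        out = np.zeros((H - 2, W - 2))
        for di in range(3):
            for dj in range(3):
                out += kernel[di, dj] * img[di:di + H - 2, dj:dj + W - 2]
        responses.append(out)
    return np.max(np.stack(responses), axis=0)

# Vertical step edge: the operator responds at the step and is zero on
# flat regions (each Kirsch kernel sums to zero).
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = kirsch_edges(img)
```

In SPPEL these operator weights are additionally trainable and the resulting edge features are fused into the prompt via multiplication, an all-one embedding, and channel attention; that pipeline is not reproduced here.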

Details

Language :
English
ISSN :
1051-8215 and 1558-2205
Volume :
34
Issue :
6
Database :
Supplemental Index
Journal :
IEEE Transactions on Circuits and Systems for Video Technology
Publication Type :
Periodical
Accession number :
ejs66588471
Full Text :
https://doi.org/10.1109/TCSVT.2023.3340740