
All-pairs Consistency Learning for Weakly Supervised Semantic Segmentation

Authors:
Sun, Weixuan
Zhang, Yanhao
Qin, Zhen
Liu, Zheyuan
Cheng, Lin
Wang, Fanyi
Zhong, Yiran
Barnes, Nick
Publication Year:
2023

Abstract

In this work, we propose a new transformer-based regularization to better localize objects for weakly supervised semantic segmentation (WSSS). In image-level WSSS, the Class Activation Map (CAM) is adopted to generate object localization as pseudo segmentation labels. To address the partial activation issue of CAMs, consistency regularization is employed to maintain activation intensity invariance across various image augmentations. However, such methods ignore pair-wise relations among regions within each CAM, which capture context and should also be invariant across image views. To this end, we propose a new all-pairs consistency regularization (ACR). Given a pair of augmented views, our approach regularizes the activation intensities between them, while also ensuring that the affinity across regions within each view remains consistent. We adopt vision transformers, as their self-attention mechanism naturally embeds pair-wise affinity. This enables us to simply regularize the distance between the attention matrices of augmented image pairs. Additionally, we introduce a novel class-wise localization method that leverages the gradients of the class token. Our method can be seamlessly integrated into existing transformer-based WSSS methods without modifying the architectures. We evaluate our method on the PASCAL VOC and MS COCO datasets. Our method produces noticeably better class localization maps (67.3% mIoU on the PASCAL VOC train set), resulting in superior WSSS performance.

Comment: ICCV 2023 workshop, code released at: https://github.com/OpenNLPLab/ACR_WSSS
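The core idea described above, regularizing the distance between the self-attention (pair-wise affinity) matrices of two augmented views, can be illustrated with a minimal numpy sketch. This is not the authors' implementation (their code is at the linked repository); the function names, the L1 distance, and the single-head attention here are illustrative assumptions only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_matrix(q, k):
    # pair-wise affinity among tokens, as in single-head self-attention:
    # softmax(Q K^T / sqrt(d)); each row is a distribution over tokens
    d = q.shape[-1]
    return softmax(q @ k.T / np.sqrt(d))

def acr_loss(attn_a, attn_b):
    # hypothetical consistency term: mean L1 distance between the
    # affinity matrices of two augmented views of the same image
    return np.abs(attn_a - attn_b).mean()

# toy example: 16 tokens, 8-dim features; the second "view" is a
# slightly perturbed copy standing in for an augmented image
rng = np.random.default_rng(0)
q = rng.standard_normal((16, 8))
k = rng.standard_normal((16, 8))
attn_view1 = attention_matrix(q, k)
attn_view2 = attention_matrix(q + 0.01 * rng.standard_normal((16, 8)), k)
loss = acr_loss(attn_view1, attn_view2)
```

In a real ViT pipeline the two attention matrices would come from forward passes over two augmentations of the same image, and this term would be added to the activation-intensity consistency loss.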

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2308.04321
Document Type:
Working Paper