Guided Attention Inference Network.
- Authors
- Li, Kunpeng; Wu, Ziyan; Peng, Kuan-Chuan; Ernst, Jan; Fu, Yun
- Subjects
- CONVOLUTIONAL neural networks; ARTIFICIAL neural networks; MACHINE learning; TEMPORAL lobe
- Abstract
With only coarse labels, weakly supervised learning typically uses top-down attention maps, generated by back-propagating gradients, as priors for tasks such as object localization and semantic segmentation. While these attention maps are intuitive and informative explanations of a deep neural network, there is no effective mechanism to manipulate the network's attention during the learning process. In this paper, we address three shortcomings of previous approaches to modeling such attention maps in one common framework. First, we make attention maps a natural and explicit component of the training pipeline so that they are end-to-end trainable. Second, we provide self-guidance directly on these maps by exploiting supervision from the network itself to improve them towards specific target tasks. Lastly, we propose a design that seamlessly bridges the gap between weak supervision and extra supervision when the latter is available. Despite its simplicity, experiments on the semantic segmentation task demonstrate the effectiveness of our method. In addition, the proposed framework provides a way not only to explain the focus of the learner but also to feed back direct guidance towards specific tasks. Under mild assumptions, our method can also be understood as a plug-in to existing convolutional neural networks that improves their generalization performance. [ABSTRACT FROM AUTHOR]
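The "top-down attention maps generated by back-propagating gradients" mentioned in the abstract can be illustrated with a minimal Grad-CAM-style computation: channel weights come from globally averaging the gradients of a class score with respect to the convolutional feature maps, and the attention map is the rectified weighted sum of those maps. This is only a generic sketch of that family of attention maps, not the paper's exact GAIN pipeline; the function name `gradcam_map` and the NumPy array shapes are assumptions for illustration.

```python
import numpy as np

def gradcam_map(features, grads):
    """Gradient-weighted attention map (Grad-CAM-style sketch).

    features: (K, H, W) conv feature maps for one image
    grads:    (K, H, W) gradients of the class score w.r.t. those maps
    Returns an (H, W) attention map normalized to [0, 1].
    """
    # Channel weights: global-average-pool the gradients over space
    weights = grads.mean(axis=(1, 2))  # shape (K,)
    # Weighted sum over channels, then ReLU to keep positive evidence
    cam = np.maximum((weights[:, None, None] * features).sum(axis=0), 0.0)
    # Normalize so the map can serve as a soft spatial mask
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam
```

In the paper's framing, making such maps an explicit, trainable component means the mask produced here would feed back into the loss rather than being computed only post hoc for visualization.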
- Published
- 2020