Captioning Transformer with Stacked Attention Modules.
- Source :
- Applied Sciences (2076-3417); May 2018, Vol. 8 Issue 5, p739, 11p
- Publication Year :
- 2018
Abstract
- Image captioning is a challenging task, and it is important for a machine to better understand the meaning of an image. In recent years, image captioning models have usually used long short-term memory (LSTM) as the decoder to generate the sentence, and these models show excellent performance. Although the LSTM can memorize dependencies, its structure is complicated and inherently sequential across time. To address these issues, recent works have shown the benefits of the Transformer for machine translation. Inspired by this success, we develop a Captioning Transformer (CT) model with stacked attention modules and introduce the Transformer to the image captioning task. The CT model contains only attention modules, without dependencies across time; it can not only memorize dependencies within the sequence but also be trained in parallel. Moreover, we propose multi-level supervision to make the Transformer achieve better performance. Extensive experiments are carried out on the challenging MSCOCO dataset, and the proposed Captioning Transformer achieves competitive performance compared with state-of-the-art methods. [ABSTRACT FROM AUTHOR]
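The abstract's core idea, attention modules without recurrence, rests on scaled dot-product attention applied repeatedly over image features. A minimal sketch of that mechanism follows; the layer count, residual wiring, and tensor shapes are illustrative assumptions, not the paper's actual CT architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ V

def stacked_attention(words, image_feats, num_layers=2):
    # hypothetical stack: each layer attends from the caption tokens
    # to the image features and adds the result back (residual)
    x = words
    for _ in range(num_layers):
        x = x + scaled_dot_product_attention(x, image_feats, image_feats)
    return x

# toy shapes: batch=1, 5 caption tokens, 9 image regions, d_model=8
rng = np.random.default_rng(0)
words = rng.standard_normal((1, 5, 8))
feats = rng.standard_normal((1, 9, 8))
out = stacked_attention(words, feats)
print(out.shape)  # (1, 5, 8)
```

Because no step depends on the previous token's hidden state, all caption positions are processed at once, which is the parallel-training advantage the abstract contrasts with the sequential LSTM.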
- Subjects :
- SHORT-term memory
- IMAGE processing
Details
- Language :
- English
- ISSN :
- 20763417
- Volume :
- 8
- Issue :
- 5
- Database :
- Complementary Index
- Journal :
- Applied Sciences (2076-3417)
- Publication Type :
- Academic Journal
- Accession number :
- 129829665
- Full Text :
- https://doi.org/10.3390/app8050739