
Reading scene text with fully convolutional sequence modeling.

Authors :
Gao, Yunze
Chen, Yingying
Wang, Jinqiao
Tang, Ming
Lu, Hanqing
Source :
Neurocomputing, Vol. 339 (Apr. 2019), pp. 161-170.
Publication Year :
2019

Abstract

Reading text in the wild is a challenging task in computer vision. Existing approaches mainly adopt connectionist temporal classification (CTC) or attention models based on recurrent neural networks (RNNs), which are computationally expensive and hard to train. In this paper, instead of the chain structure of an RNN, we propose an end-to-end fully convolutional network with stacked convolutional layers to effectively capture the long-term dependencies among elements of a scene text image. The stacked convolutional layers are much more efficient than a bidirectional long short-term memory (BLSTM) at modeling contextual dependency. In addition, we design a discriminative feature encoder by incorporating residual attention blocks into a small densely connected network to enhance the foreground text and suppress background noise. Extensive experiments on seven standard benchmarks (Street View Text, IIIT5K, ICDAR03, ICDAR13, ICDAR15, COCO-Text and Total-Text) validate that our method not only achieves state-of-the-art or highly competitive recognition performance but also significantly improves efficiency and reduces the number of parameters.
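The abstract's core idea, replacing BLSTM context modeling with a stack of same-padded 1-D convolutions whose receptive field grows with depth, can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; layer count, kernel size, and fixed all-ones weights are illustrative assumptions chosen only to make the receptive-field growth easy to trace.

```python
import numpy as np

def conv1d_same(x, w):
    """Same-padded 1-D convolution. x: (c_in, t), w: (c_out, c_in, k)."""
    c_out, c_in, k = w.shape
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad)))  # zero "same" padding keeps length t
    out = np.zeros((c_out, x.shape[1]))
    for i in range(x.shape[1]):
        # Contract over input channels and kernel taps for one output column.
        out[:, i] = np.tensordot(w, xp[:, i:i + k], axes=([1, 2], [0, 1]))
    return out

def stacked_context(x, num_layers=4, k=3):
    """Stack of conv + ReLU layers; each layer widens the receptive field
    by k-1, so num_layers layers see num_layers*(k-1)+1 sequence positions.
    Weights are fixed to ones purely to trace the receptive field."""
    c = x.shape[0]
    w = np.ones((c, c, k))
    for _ in range(num_layers):
        x = np.maximum(conv1d_same(x, w), 0.0)
    return x

# An impulse at one column spreads to exactly the receptive-field width,
# showing how stacked convolutions capture long-range dependencies.
x = np.zeros((8, 32))   # 8 channels, 32 horizontal feature positions
x[:, 16] = 1.0
y = stacked_context(x)
print(int(np.count_nonzero(y[0])))  # 9 = 4*(3-1)+1 positions
```

Unlike a BLSTM, which processes positions sequentially, every column here is computed independently, so the whole sequence parallelizes across positions; that is the efficiency argument the abstract makes.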

Details

Language :
English
ISSN :
0925-2312
Volume :
339
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession Number :
135351472
Full Text :
https://doi.org/10.1016/j.neucom.2019.01.094