
Display-Semantic Transformer for Scene Text Recognition

Authors :
Xinqi Yang
Wushour Silamu
Miaomiao Xu
Yanbing Li
Source :
Sensors, Vol 23, Iss 19, p 8159 (2023)
Publication Year :
2023
Publisher :
MDPI AG, 2023.

Abstract

Linguistic knowledge contributes substantially to scene text recognition by providing semantic information that refines the predicted character sequence. Purely visual models focus on the visual texture of characters without actively learning linguistic information, which leads to poor recognition rates on noisy images (e.g., distorted or blurry ones). To address these issues, this study builds on recent Vision Transformer findings: our approach, called the Display-Semantic Transformer (DST for short), constructs a masked language model and a semantic-visual interaction module. The model mines deep semantic information from images to assist scene text recognition and improve robustness. The semantic-visual interaction module better realizes the interaction between semantic information and visual features, so that the visual features are enhanced by semantic information and the model achieves better recognition. Experimental results show that our model improves average recognition accuracy on six benchmark test sets by nearly 2% over the baseline, while retaining a small number of parameters and fast inference speed, thus attaining a better balance between accuracy and speed.
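The abstract does not give implementation details, but the described semantic-visual interaction (semantic information enhancing visual features) is commonly realized as cross-attention. A minimal single-head sketch, assuming semantic embeddings act as queries over visual tokens (the function name, shapes, and single-head form are illustrative assumptions, not the paper's actual architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def semantic_visual_fusion(visual, semantic):
    """Hypothetical cross-attention sketch: semantic embeddings (queries)
    attend over visual tokens (keys/values); the attended context is added
    back to the semantic stream as a residual, mimicking how semantic
    information could refine visual evidence per character position."""
    d = visual.shape[-1]
    scores = semantic @ visual.T / np.sqrt(d)   # (T_sem, T_vis) similarity
    attn = softmax(scores, axis=-1)             # each row sums to 1
    context = attn @ visual                     # (T_sem, d) attended visual context
    return semantic + context                   # residual fusion

visual = np.random.rand(20, 64)    # e.g., 20 visual tokens from ViT patches
semantic = np.random.rand(8, 64)   # e.g., 8 character-position embeddings
fused = semantic_visual_fusion(visual, semantic)
```

In a full model, the fused features would typically be projected and decoded into character logits; the paper's actual module may use multi-head attention and learned projections.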

Details

Language :
English
ISSN :
1424-8220
Volume :
23
Issue :
19
Database :
Directory of Open Access Journals
Journal :
Sensors
Publication Type :
Academic Journal
Accession number :
edsdoj.8db8185e432d4fef8a72d1360a3bea92
Document Type :
article
Full Text :
https://doi.org/10.3390/s23198159