A method for image–text matching based on semantic filtering and adaptive adjustment
- Source :
- EURASIP Journal on Image and Video Processing, Vol 2024, Iss 1, Pp 1-17 (2024)
- Publication Year :
- 2024
- Publisher :
- SpringerOpen, 2024.
Abstract
- Abstract As image–text matching (a critical task in the field of computer vision) links cross-modal data, it has captured extensive attention. Most of the existing methods intended for matching images and texts explore the local similarity levels between images and sentences to align images with texts. Even though this fine-grained approach has remarkable gains, how to further mine the deep semantics between data pairs and focus on the essential semantics in data remains to be quested. In this work, a new semantic filtering and adaptive approach (FAAR) was proposed to ease the above problem. To be specific, the filtered attention (FA) module selectively focuses on typical alignments with the interference of meaningless comparisons eliminated. Next, the adaptive regulator (AR) further adjusts the attention weights of key segments for filtered regions and words. The superiority of our proposed method was validated by a number of qualitative experiments and analyses on the Flickr30K and MSCOCO data sets.
Details
- Language :
- English
- ISSN :
- 1687-5281
- Volume :
- 2024
- Issue :
- 1
- Database :
- Directory of Open Access Journals
- Journal :
- EURASIP Journal on Image and Video Processing
- Publication Type :
- Academic Journal
- Accession number :
- edsdoj.4f0f584438e34f98a7c74827eeb1443c
- Document Type :
- article
- Full Text :
- https://doi.org/10.1186/s13640-024-00639-y