
A Study on Generative Models for Visual Recognition of Unknown Scenes Using a Textual Description

Authors :
Jose Martinez-Carranza
Delia Irazú Hernández-Farías
Victoria Eugenia Vazquez-Meza
Leticia Oyuki Rojas-Perez
Aldrich Alfredo Cabrera-Ponce
Source :
Sensors, Vol 23, Iss 21, p 8757 (2023)
Publication Year :
2023
Publisher :
MDPI AG, 2023.

Abstract

In this study, we investigate the application of generative models to assist artificial agents, such as delivery drones or service robots, in visualising unfamiliar destinations solely based on textual descriptions. We explore the use of generative models, such as Stable Diffusion, and embedding representations, such as CLIP and VisualBERT, to compare generated images obtained from textual descriptions of target scenes with images of those scenes. Our research encompasses three key strategies: image generation, text generation, and text enhancement, the latter involving tools such as ChatGPT to create concise textual descriptions for evaluation. The findings of this study contribute to an understanding of the impact of combining generative tools with multi-modal embedding representations to enhance the artificial agent’s ability to recognise unknown scenes. Consequently, we assert that this research holds broad applications, particularly in drone parcel delivery, where an aerial robot can employ text descriptions to identify a destination. Furthermore, this concept can also be applied to other service robots tasked with delivering to unfamiliar locations, relying exclusively on user-provided textual descriptions.
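As a rough illustration of the pipeline the abstract describes, the sketch below generates an image from a textual description with Stable Diffusion and compares it with a photo of the target scene via CLIP image embeddings (cosine similarity). This is a minimal sketch assuming the Hugging Face diffusers and transformers libraries; the model identifiers and the file name target_scene.jpg are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch: text-to-image generation + CLIP-based scene comparison.
# Model names and the scene photo path are illustrative assumptions.
import torch
from PIL import Image
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

description = "a red brick house with a white door and a small front garden"

# Generate a candidate image of the described destination.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
generated = pipe(description).images[0]

# Embed the generated image and the real scene photo with CLIP.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
scene = Image.open("target_scene.jpg")  # assumed photo captured by the agent

inputs = processor(images=[generated, scene], return_tensors="pt")
with torch.no_grad():
    feats = clip.get_image_features(**inputs)
feats = feats / feats.norm(dim=-1, keepdim=True)  # unit-normalise embeddings

similarity = (feats[0] @ feats[1]).item()  # cosine similarity in [-1, 1]
print(f"generated-vs-scene similarity: {similarity:.3f}")
```

In a deployment such as drone parcel delivery, a threshold on this similarity score could serve as a simple decision rule for whether the observed scene matches the user-described destination.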

Details

Language :
English
ISSN :
1424-8220
Volume :
23
Issue :
21
Database :
Directory of Open Access Journals
Journal :
Sensors
Publication Type :
Academic Journal
Accession number :
edsdoj.f4dafc596104df1922f56af79f0f670
Document Type :
article
Full Text :
https://doi.org/10.3390/s23218757