Scene description with context information using dense-LSTM.
- Source :
- Journal of Intelligent & Fuzzy Systems. 2023, Vol. 44, Issue 5, p7553-7565. 13p.
- Publication Year :
- 2023
Abstract
- Generating natural language descriptions for visual content is a technique for describing the content present in an image. It requires knowledge of both computer vision and natural language processing, and various models with different approaches have been suggested for this task. One of them is encoder-decoder-based description generation. Existing papers used only objects for the descriptions, but the relationships between those objects are equally essential and require context information, which calls for techniques such as Long Short-Term Memory (LSTM). This paper proposes an encoder-decoder-based methodology to generate human-like textual descriptions. A Dense-LSTM decoder, paired with a modified VGG19 encoder, is presented to capture the information needed to describe the scene. The standard Flickr8K and Flickr30k datasets are used for training and testing, and the BLEU (Bilingual Evaluation Understudy) score is used to evaluate the generated text. A GUI (Graphical User Interface) is developed for the proposed model; it produces an audio description of the output and provides an interface for searching related visual content and for query-based search. [ABSTRACT FROM AUTHOR]
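
The sketch below illustrates the kind of pipeline the abstract describes: a VGG19 image encoder feeding an LSTM-based decoder, with BLEU used to score generated captions. It is a minimal, hypothetical example only; the vocabulary size, caption length, and layer widths are placeholders, and a plain LSTM stands in for the paper's Dense-LSTM, whose exact layout is not given in this record.

```python
# Minimal encoder-decoder captioning sketch (hypothetical sizes, not the
# paper's Dense-LSTM configuration).
import tensorflow as tf
from tensorflow.keras import Model, layers
from tensorflow.keras.applications import VGG19
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

VOCAB_SIZE = 5000   # hypothetical vocabulary size
MAX_LEN = 34        # hypothetical maximum caption length
EMBED_DIM = 256     # hypothetical embedding / hidden width

# Encoder: VGG19 with its classifier removed; the 4096-d fc2 activations
# serve as the image representation.
vgg = VGG19(weights="imagenet")
encoder = Model(vgg.input, vgg.get_layer("fc2").output)

# Decoder: image features and the partial caption are projected to a common
# width, merged, and used to predict the next word of the caption.
img_in = layers.Input(shape=(4096,), name="image_features")
img_vec = layers.Dense(EMBED_DIM, activation="relu")(layers.Dropout(0.5)(img_in))

seq_in = layers.Input(shape=(MAX_LEN,), name="partial_caption")
seq_emb = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(seq_in)
seq_vec = layers.LSTM(EMBED_DIM)(layers.Dropout(0.5)(seq_emb))

merged = layers.add([img_vec, seq_vec])
hidden = layers.Dense(EMBED_DIM, activation="relu")(merged)
next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

caption_model = Model([img_in, seq_in], next_word)
caption_model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
caption_model.summary()

# Evaluation: BLEU compares a generated caption against reference captions.
reference = [["a", "dog", "runs", "across", "the", "grass"]]
candidate = ["a", "dog", "is", "running", "on", "grass"]
print(sentence_bleu(reference, candidate,
                    smoothing_function=SmoothingFunction().method1))
```

This "merge" style (image and text features combined just before the output layer) is one common way to wire a CNN encoder to an LSTM decoder; the paper's actual architecture may differ.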
Details
- Language :
- English
- ISSN :
- 1064-1246
- Volume :
- 44
- Issue :
- 5
- Database :
- Academic Search Index
- Journal :
- Journal of Intelligent & Fuzzy Systems
- Publication Type :
- Academic Journal
- Accession number :
- 164007986
- Full Text :
- https://doi.org/10.3233/JIFS-222358