
A Deep Structured Model for Video Captioning

Authors :
Somula, Ramasubbareddy
Vinodhini, V.
Sathiyabhama, B.
Sankar, S.
Source :
International Journal of Gaming and Computer-Mediated Simulations; April 2020, Vol. 12 Issue: 2 p44-56, 13p
Publication Year :
2020

Abstract

Video captions help people understand content in noisy environments or when the sound is muted, and they are especially helpful for viewers with impaired hearing. Captions not only support content creators and translators but also boost search engine optimization. With the successful growth of deep learning techniques, advanced areas such as computer vision and human-computer interaction play a vital role. Numerous surveys on deep learning models have emerged, covering different methods, architectures, and metrics. Working with video subtitles remains challenging, particularly for activity recognition in video. This paper proposes a deep structured model that performs activity recognition, classification, and captioning within a single architecture. The first step separates the foreground from the background by building a 3D convolutional neural network (CNN) model; a Gaussian mixture model is used to remove the backdrop. Classification is performed using long short-term memory (LSTM) networks, and a hidden Markov model (HMM) is used to generate high-quality data. A nonlinear activation function then performs the normalization step. Finally, the video captioning is achieved using natural language.
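The background-subtraction step described above can be illustrated with a minimal sketch. The paper uses a Gaussian *mixture* model per pixel; the version below keeps only a single running Gaussian per pixel and works on a toy 1-D "video", so it is an illustrative simplification under assumed parameters (learning rate `alpha`, threshold `k`), not the authors' implementation.

```python
# Sketch of Gaussian background modelling for foreground separation:
# each pixel keeps a running mean/variance of the background, and a
# pixel is flagged as foreground when it deviates from that model.
# Single-Gaussian simplification of the paper's Gaussian mixture model.

def update_background(mean, var, pixel, alpha=0.05):
    """Exponentially update the per-pixel Gaussian background model."""
    new_mean = (1 - alpha) * mean + alpha * pixel
    new_var = (1 - alpha) * var + alpha * (pixel - new_mean) ** 2
    return new_mean, new_var

def foreground_mask(frame, means, vars_, k=2.5):
    """Flag pixels more than k standard deviations from the background."""
    return [abs(p - m) > k * (v ** 0.5 + 1e-6)
            for p, m, v in zip(frame, means, vars_)]

# Toy 1-D "video": a static background of intensity 100, with a bright
# moving object (intensity 250) entering at index 2 in the final frame.
frames = [[100, 100, 100, 100]] * 10 + [[100, 100, 250, 100]]
means = [float(p) for p in frames[0]]   # initialise from the first frame
vars_ = [100.0] * 4                     # assumed initial variance
for frame in frames[1:-1]:
    for i, p in enumerate(frame):
        means[i], vars_[i] = update_background(means[i], vars_[i], p)

mask = foreground_mask(frames[-1], means, vars_)
print(mask)  # only the moving-object pixel is foreground
```

Running this prints `[False, False, True, False]`: the static background pixels stay within the model, while the bright object pixel is separated as foreground, which is the input the 3D CNN stage would then consume.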

Details

Language :
English
ISSN :
1942-3888 and 1942-3896
Volume :
12
Issue :
2
Database :
Supplemental Index
Journal :
International Journal of Gaming and Computer-Mediated Simulations
Publication Type :
Periodical
Accession number :
ejs54070988
Full Text :
https://doi.org/10.4018/IJGCMS.2020040103