
Text2Video: An End-to-end Learning Framework for Expressing Text With Videos.

Authors :
Yang, Xiaoshan
Zhang, Tianzhu
Xu, Changsheng
Source :
IEEE Transactions on Multimedia; Sep 2018, Vol. 20 Issue 9, p2360-2370, 11p
Publication Year :
2018

Abstract

Video creation is a challenging and highly professional task that generally involves substantial manual effort. To ease this burden, a better approach is to automatically produce new videos from clips of the massive amount of existing videos according to arbitrary text. In this paper, we formulate video creation as the problem of retrieving a sequence of video clips for a sentence stream. To achieve this goal, we propose a novel multimodal recurrent architecture for automatic video production. Compared with existing methods, the proposed model has three major advantages. First, to the best of our knowledge, it is the first completely integrated end-to-end deep learning system for real-world video production, and we are among the first to address the problem of retrieving a sequence of videos for a sentence stream. Second, it can effectively exploit the correspondence between sentences and video clips through semantic consistency modeling. Third, it models visual coherence well by requiring that the produced videos be organized coherently in terms of visual appearance. We have conducted extensive experiments on two applications: video retrieval and video composition. Qualitative and quantitative results on two public datasets used in the Large Scale Movie Description Challenge 2016 demonstrate the effectiveness of the proposed model compared with other state-of-the-art algorithms. [ABSTRACT FROM AUTHOR]
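The abstract frames video composition as picking one clip per sentence so that each clip matches its sentence semantically while consecutive clips remain visually coherent. Below is a minimal illustrative sketch of that formulation, not the authors' implementation: the embeddings, the cosine similarities, the coherence_weight parameter, and the Viterbi-style decoder are all assumptions introduced here to make the joint objective concrete.

import numpy as np

def retrieve_sequence(sent_embs, clip_embs, coherence_weight=0.5):
    """Pick one candidate clip index per sentence.

    sent_embs: (S, D) sentence embeddings (assumed precomputed).
    clip_embs: (C, D) candidate clip embeddings in the same space.
    Returns S clip indices approximately maximizing
        sum_t sim(sent_t, clip_t) + w * sum_t sim(clip_t, clip_{t+1})
    via a Viterbi-style dynamic program over the clip choices.
    """
    def cosine(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return a @ b.T

    sem = cosine(sent_embs, clip_embs)   # (S, C) sentence-clip similarity
    vis = cosine(clip_embs, clip_embs)   # (C, C) clip-clip visual coherence

    S, C = sem.shape
    score = sem[0].copy()                # best path score ending at each clip
    back = np.zeros((S, C), dtype=int)   # backpointers for path recovery
    for t in range(1, S):
        # transition: previous path score plus weighted visual coherence
        trans = score[:, None] + coherence_weight * vis
        back[t] = trans.argmax(axis=0)
        score = trans.max(axis=0) + sem[t]

    # backtrack the highest-scoring clip sequence
    path = [int(score.argmax())]
    for t in range(S - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sentences = rng.normal(size=(3, 16))  # 3 sentences, toy 16-d embeddings
    clips = rng.normal(size=(10, 16))     # 10 candidate clips
    print(retrieve_sequence(sentences, clips))

In the paper the two similarity terms are learned jointly by the multimodal recurrent network end to end; the fixed weighting and hand-built decoder here only illustrate how semantic consistency and visual coherence trade off during retrieval.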

Details

Language :
English
ISSN :
1520-9210
Volume :
20
Issue :
9
Database :
Complementary Index
Journal :
IEEE Transactions on Multimedia
Publication Type :
Academic Journal
Accession number :
131288834
Full Text :
https://doi.org/10.1109/TMM.2018.2807588