
Language-Guided Self-Supervised Video Summarization Using Text Semantic Matching Considering the Diversity of the Video

Authors :
Sugihara, Tomoya
Masuda, Shuntaro
Xiao, Ling
Yamasaki, Toshihiko
Publication Year :
2024

Abstract

Current video summarization methods rely heavily on supervised computer vision techniques, which demand time-consuming and subjective manual annotations. To overcome these limitations, we investigated self-supervised video summarization. Inspired by the success of Large Language Models (LLMs), we explored the feasibility of transforming the video summarization task into a Natural Language Processing (NLP) task. By leveraging the advantages of LLMs in context understanding, we aim to enhance the effectiveness of self-supervised video summarization. Our method begins by generating captions for individual video frames, which are then synthesized into text summaries by LLMs. Subsequently, we measure the semantic distance between the captions and the text summary. Notably, we propose a novel loss function to optimize our model according to the diversity of the video. Finally, the summarized video can be generated by selecting the frames whose captions are similar to the text summary. Our method achieves state-of-the-art performance on the SumMe dataset in rank correlation coefficients. In addition, our method has the novel feature of enabling personalized summarization.
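The selection step described in the abstract (keep the frames whose captions are semantically closest to the LLM-generated text summary) can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the `embed` function here is a toy bag-of-words stand-in for whatever learned text encoder the authors use, and the `ratio` parameter (summary length as a fraction of the video) is an assumed knob, not taken from the paper.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding" (word-count vector).
    # The actual method would use a learned sentence encoder.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_frames(captions, text_summary, ratio=0.15):
    # Score each frame caption by similarity to the text summary,
    # then keep the top `ratio` fraction of frames, in temporal order.
    summary_vec = embed(text_summary)
    scores = [cosine(embed(c), summary_vec) for c in captions]
    k = max(1, int(len(captions) * ratio))
    top = sorted(range(len(captions)),
                 key=lambda i: scores[i], reverse=True)[:k]
    return sorted(top)
```

For example, `select_frames(["a dog runs", "a cat sleeps", "the dog jumps"], "a dog running and jumping", ratio=0.34)` keeps only the frame whose caption best matches the summary. Personalized summarization, as mentioned in the abstract, would amount to swapping in a user-provided text summary.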

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2405.08890
Document Type :
Working Paper