
Video Infringement Detection via Feature Disentanglement and Mutual Information Maximization

Authors :
Liu, Zhenguang
Yu, Xinyang
Wang, Ruili
Ye, Shuai
Ma, Zhe
Dong, Jianfeng
He, Sifeng
Qian, Feng
Zhang, Xiaobo
Zimmermann, Roger
Yang, Lei
Publication Year :
2023

Abstract

The self-media era provides us with a tremendous number of high-quality videos. Unfortunately, frequent video copyright infringements now seriously damage the interests and enthusiasm of video creators. Identifying infringing videos is therefore a compelling task. Current state-of-the-art methods tend to simply feed high-dimensional mixed video features into deep neural networks and count on the networks to extract useful representations. Despite its simplicity, this paradigm heavily relies on the original entangled features and lacks constraints guaranteeing that useful task-relevant semantics are extracted from the features. In this paper, we seek to tackle the above challenges from two aspects: (1) We propose to disentangle an original high-dimensional feature into multiple sub-features, explicitly decomposing the feature into exclusive lower-dimensional components. We expect the sub-features to encode non-overlapping semantics of the original feature and to remove redundant information. (2) On top of the disentangled sub-features, we further learn an auxiliary feature to enhance them. We theoretically analyze the mutual information between the label and the disentangled features, arriving at a loss that maximizes the extraction of task-relevant information from the original feature. Extensive experiments on two large-scale benchmark datasets (i.e., SVD and VCSL) demonstrate that our method achieves 90.1% TOP-100 mAP on the large-scale SVD dataset and also sets the new state of the art on the VCSL benchmark dataset. Our code and model have been released at https://github.com/yyyooooo/DMI/, hoping to contribute to the community.

Comment: This paper is accepted by ACM MM 2023.
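The abstract's core idea of splitting one entangled feature into exclusive, non-overlapping sub-features can be illustrated with a minimal toy sketch. This is not the paper's actual model (which uses learned projections and a mutual-information-based loss); the function names, the fixed slicing, and the cosine-similarity redundancy proxy below are all illustrative assumptions.

```python
import numpy as np

def disentangle(feature, num_parts):
    """Split a high-dimensional feature into equal, non-overlapping sub-features.

    Toy stand-in for the paper's disentanglement: the real sub-features come
    from trained projection heads, not fixed slicing.
    """
    dim = feature.shape[-1]
    assert dim % num_parts == 0, "feature dim must divide evenly"
    return np.split(feature, num_parts, axis=-1)

def redundancy_penalty(sub_features):
    """Mean absolute pairwise cosine similarity between sub-features.

    Minimizing such a term encourages sub-features to encode non-overlapping
    semantics; it is only a crude proxy for the paper's MI-based objective.
    """
    normed = [s / (np.linalg.norm(s) + 1e-8) for s in sub_features]
    sims = [abs(float(a @ b)) for i, a in enumerate(normed)
            for b in normed[i + 1:]]
    return sum(sims) / len(sims)

# Example: a 12-dimensional feature split into 3 exclusive 4-dim sub-features.
feat = np.arange(12.0)
parts = disentangle(feat, 3)
penalty = redundancy_penalty(parts)
```

In the actual method, a loss of this flavor would be combined with a term that keeps each sub-feature predictive of the infringement label, so that redundancy is removed without discarding task-relevant information.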

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2309.06877
Document Type :
Working Paper