
Vis2Mus: Exploring Multimodal Representation Mapping for Controllable Music Generation

Authors:
Zhang, Runbang
Zhang, Yixiao
Shao, Kai
Shan, Ying
Xia, Gus
Publication Year:
2022

Abstract

In this study, we explore representation mapping from the domain of visual arts to the domain of music, with which we can use visual arts as an effective handle to control music generation. Unlike most studies in multimodal representation learning, which are purely data-driven, we adopt an analysis-by-synthesis approach that combines deep music representation learning with user studies. This approach enables us to discover an interpretable representation mapping without a huge amount of paired data. In particular, we find that the visual-to-music mapping has a nice property similar to equivariance: various image transformations (e.g., changing brightness, changing contrast, or applying style transfer) control the corresponding transformations in the music domain. In addition, we release the Vis2Mus system as a controllable interface for symbolic music generation.

Comment: Submitted to ICASSP 2023. GitHub repo: https://github.com/ldzhangyx/vis2mus
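To make the equivariance-like property concrete, here is a minimal Python sketch of the idea described in the abstract. The encode_image, decode_music, and brighten functions below are placeholder assumptions for illustration only, not the actual Vis2Mus models: the point is that applying a transformation in the image domain (here, raising brightness) induces a corresponding, consistent change in the generated music.

```python
import numpy as np

# Hypothetical stand-ins for the paper's learned models (NOT the actual
# Vis2Mus code): an image encoder into a shared latent space and a
# symbolic-music decoder out of it.
def encode_image(image: np.ndarray) -> np.ndarray:
    """Map an image to a latent vector (placeholder: per-channel means)."""
    return image.reshape(-1, image.shape[-1]).mean(axis=0)

def decode_music(latent: np.ndarray) -> list:
    """Map a latent vector to a toy note sequence (placeholder)."""
    return [int(60 + 12 * v) for v in latent]  # MIDI pitches near middle C

def brighten(image: np.ndarray, amount: float = 0.2) -> np.ndarray:
    """An image-domain transformation: increase brightness."""
    return np.clip(image + amount, 0.0, 1.0)

# Equivariance-like behavior: transforming the *image* shifts the
# latent code, which in turn shifts the *generated music*.
image = np.random.default_rng(0).random((64, 64, 3))
music_before = decode_music(encode_image(image))
music_after = decode_music(encode_image(brighten(image)))
print(music_before, music_after)  # brighter image -> higher pitches
```

In this toy setup the brightness increase raises every latent coordinate, so the decoded pitches rise in a predictable way; the paper's user studies explore which real image transformations map to interpretable musical changes in this manner.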

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2211.05543
Document Type:
Working Paper