
Multi-view and Multi-modal Event Detection Utilizing Transformer-based Multi-sensor Fusion

Authors:
Yasuda, Masahiro
Ohishi, Yasunori
Saito, Shoichiro
Harada, Noboru
Publication Year:
2022

Abstract

We tackle a challenging task: multi-view and multi-modal event detection, which detects events in a wide-range real environment by utilizing data from distributed cameras and microphones and their weak labels. In this task, distributed sensors are used complementarily to capture events that are difficult to capture with a single sensor, such as a series of actions by people moving through an intricate room, or communication between people located far apart in a room. For sensors to cooperate effectively in such a situation, the system should be able to exchange information among sensors and combine information that is useful for identifying events in a complementary manner. To realize such a mechanism, we propose a Transformer-based multi-sensor fusion (MultiTrans), which combines multi-sensor data on the basis of the relationships between features of different viewpoints and modalities. In experiments using a dataset newly collected for this task, our proposed method using MultiTrans improved the event detection performance and outperformed comparative methods.

Comment: 5 pages, 5 figures, to appear in IEEE ICASSP 2022
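To illustrate the general idea of Transformer-based multi-sensor fusion, the following is a minimal sketch (not the authors' code): per-sensor features from the different views and modalities are treated as tokens, self-attention lets the sensors exchange information, and a pooled representation is used for clip-level (weakly labeled) event classification. The class name, dimensions, and pooling choice are assumptions for illustration only.

    # Minimal sketch of Transformer-based fusion across distributed sensors.
    # All names and hyperparameters here are hypothetical, not from the paper.
    import torch
    import torch.nn as nn

    class MultiSensorFusion(nn.Module):
        def __init__(self, feat_dim=256, n_heads=4, n_layers=2, n_events=10):
            super().__init__()
            layer = nn.TransformerEncoderLayer(
                d_model=feat_dim, nhead=n_heads, batch_first=True)
            self.fusion = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.classifier = nn.Linear(feat_dim, n_events)

        def forward(self, sensor_feats):
            # sensor_feats: (batch, n_sensors, feat_dim), one embedding per
            # camera/microphone feature extractor (views and modalities mixed).
            fused = self.fusion(sensor_feats)      # attention across sensor tokens
            clip_emb = fused.mean(dim=1)           # pool over sensors
            return torch.sigmoid(self.classifier(clip_emb))  # clip-level event probabilities

    # Example usage: 4 cameras + 4 microphones -> 8 sensor tokens per clip.
    model = MultiSensorFusion()
    x = torch.randn(2, 8, 256)
    probs = model(x)  # shape (2, 10): per-clip event probabilities for weak labels

In this sketch, the self-attention weights play the role of relating features across viewpoints and modalities; the actual MultiTrans architecture and training setup are described in the paper.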

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2202.09124
Document Type:
Working Paper