
GenMM: Geometrically and Temporally Consistent Multimodal Data Generation for Video and LiDAR

Authors:
Singh, Bharat
Kulharia, Viveka
Yang, Luyu
Ravichandran, Avinash
Tyagi, Ambrish
Shrivastava, Ashish
Publication Year:
2024

Abstract

Multimodal synthetic data generation is crucial in domains such as autonomous driving, robotics, augmented/virtual reality, and retail. We propose a novel approach, GenMM, for jointly editing RGB videos and LiDAR scans by inserting temporally and geometrically consistent 3D objects. Our method uses a reference image and 3D bounding boxes to seamlessly insert and blend new objects into target videos. We inpaint the 2D Regions of Interest (consistent with the 3D boxes) using a diffusion-based video inpainting model. We then compute the semantic boundaries of the object and estimate its surface depth using state-of-the-art semantic segmentation and monocular depth estimation techniques. Subsequently, we employ a geometry-based optimization algorithm to recover the 3D shape of the object's surface, ensuring it fits precisely within the 3D bounding box. Finally, LiDAR rays intersecting the new object's surface are updated to reflect depths consistent with its geometry. Our experiments demonstrate the effectiveness of GenMM in inserting various 3D objects across video and LiDAR modalities.
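The final step of the pipeline, updating LiDAR returns where rays hit the inserted object, can be illustrated with a minimal sketch. The paper recovers an optimized object surface; as a stand-in assumption, the sketch below intersects each ray with the object's axis-aligned 3D bounding box (slab method) and shortens the ray's range when the box is hit before the original return. All function names here are illustrative, not from the paper.

```python
import numpy as np

def ray_aabb_entry(origins, dirs, box_min, box_max):
    """Slab-method ray/AABB intersection.

    origins, dirs: (N, 3) arrays of ray origins and unit directions.
    Returns the entry distance t for each ray (np.inf where the ray
    misses the box). The box stands in for the optimized object
    surface recovered by the actual method.
    """
    with np.errstate(divide="ignore", invalid="ignore"):
        inv = 1.0 / dirs
        t1 = (box_min - origins) * inv
        t2 = (box_max - origins) * inv
    t_near = np.minimum(t1, t2).max(axis=1)  # latest entry across slabs
    t_far = np.maximum(t1, t2).min(axis=1)   # earliest exit across slabs
    hit = (t_far >= t_near) & (t_far > 0)
    return np.where(hit, np.maximum(t_near, 0.0), np.inf)

def update_lidar_ranges(origins, dirs, ranges, box_min, box_max):
    """Replace each ray's measured range with the new object's depth
    whenever the object is hit before the original return."""
    t_obj = ray_aabb_entry(origins, dirs, box_min, box_max)
    return np.where(t_obj < ranges, t_obj, ranges)
```

For example, a ray fired along +x toward a box spanning x in [2, 4] has its original 10 m return shortened to 2 m, while a ray that misses the box keeps its measured range unchanged.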

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.10722
Document Type:
Working Paper