MouseSIS: A Frames-and-Events Dataset for Space-Time Instance Segmentation of Mice
- Source: European Conference on Computer Vision (ECCV) Workshops, Milan, Italy, 2024
- Publication Year: 2024
Abstract
- Enabled by large annotated datasets, tracking and segmentation of objects in videos have made remarkable progress in recent years. Despite these advancements, algorithms still struggle under degraded conditions and during fast movements. Event cameras are novel sensors with high temporal resolution and high dynamic range that offer promising advantages for addressing these challenges. However, annotated data for developing learning-based mask-level tracking algorithms with events is not available. To this end, we introduce: (i) a new task termed *space-time instance segmentation*, similar to video instance segmentation, whose goal is to segment instances throughout the entire duration of the sensor input (here, the input is quasi-continuous events and, optionally, aligned frames); and (ii) *MouseSIS*, a dataset for the new task, containing aligned grayscale frames and events. It includes annotated ground-truth labels (pixel-level instance segmentation masks) of a group of up to seven freely moving and interacting mice. We also provide two reference methods, which show that leveraging event data can consistently improve tracking performance, especially when used in combination with conventional cameras. The results highlight the potential of event-aided tracking in difficult scenarios. We hope our dataset opens the field of event-based video instance segmentation and enables the development of robust tracking algorithms for challenging conditions. https://github.com/tub-rip/MouseSIS
- Comment: 18 pages, 5 figures, ECCV Workshops
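To make the two modalities in the abstract concrete, the sketch below builds toy containers for an asynchronous event stream (timestamp, pixel coordinates, polarity tuples, the standard event-camera output) alongside aligned grayscale frames and per-instance ground-truth masks for up to seven mice. All field names, shapes, and the sensor resolution are illustrative assumptions, not the dataset's actual schema; see the linked repository for the real format.

```python
import numpy as np

rng = np.random.default_rng(0)
num_events, height, width = 1000, 260, 346  # assumed DAVIS-like sensor resolution

# Event cameras emit asynchronous tuples (t, x, y, polarity); timestamps are
# quasi-continuous, which is what enables tracking through fast movements.
events = np.zeros(
    num_events,
    dtype=[("t", "f8"), ("x", "u2"), ("y", "u2"), ("p", "i1")],
)
events["t"] = np.sort(rng.uniform(0.0, 1.0, num_events))  # seconds, ascending
events["x"] = rng.integers(0, width, num_events)
events["y"] = rng.integers(0, height, num_events)
events["p"] = rng.choice([-1, 1], num_events)  # sign of the brightness change

# Aligned grayscale frames and pixel-level instance masks: one boolean mask
# channel per animal, with at most seven freely moving mice per sequence.
num_frames, max_instances = 30, 7
frames = np.zeros((num_frames, height, width), dtype=np.uint8)
masks = np.zeros((num_frames, max_instances, height, width), dtype=bool)

# A space-time instance segmentation method must produce such masks with
# consistent instance identities across the entire sequence.
print(events.shape, frames.shape, masks.shape)
```

The structured-array layout for events and the `(frames, instances, H, W)` mask tensor are common conventions in event-vision and video-segmentation code, chosen here only to make the task definition tangible.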
Details
- Database: arXiv
- Journal: European Conference on Computer Vision (ECCV) Workshops, Milan, Italy, 2024
- Publication Type: Report
- Accession number: edsarx.2409.03358
- Document Type: Working Paper